
“Sybase has been dead a very long time. May they rest in peace.”

— Larry Ellison, CEO, Oracle Corporation


Several years later…

The Official New Features Guide to Sybase® ASE 15

Brian Taylor, Naresh Adurty, Steve Bradley, and Carrie King Taylor

with Mark A. Shelton and Jagan Reddy

Wordware Publishing, Inc.


Library of Congress Cataloging-in-Publication Data

Taylor, Brian.
    The official new features guide to Sybase ASE 15 / by Brian Taylor ... [et al.].
        p. cm.
    Includes index.
    ISBN-13: 978-1-59822-004-9
    ISBN-10: 1-59822-004-7 (pbk.)
    1. Client/server computing. 2. Sybase. I. Taylor, Brian (Brian Robert), 1972- .
    QA76.9.C55O4 2006
    005.2'768--dc22                                            2005036445

© 2006, Wordware Publishing, Inc.

All Rights Reserved

2320 Los Rios Boulevard
Plano, Texas 75074

No part of this book may be reproduced in any form or by any means without permission in writing from Wordware Publishing, Inc.

Printed in the United States of America

ISBN-13: 978-1-59822-004-9

ISBN-10: 1-59822-004-7

10 9 8 7 6 5 4 3 2 1

0602

Sybase, Adaptive Server, and the Sybase Fibonacci symbol are registered trademarks of Sybase, Inc. in the United States of America and/or other countries. Other brand names and product names mentioned in this book are trademarks or service marks of their respective companies. Any omission or misuse (of any kind) of service marks or trademarks should not be regarded as intent to infringe on the property of others. The publisher recognizes and respects all marks used by companies, manufacturers, and developers as a means to distinguish their products.

This book is sold as is, without warranty of any kind, either express or implied, respecting the contents of this book and any disks or programs that may accompany it, including but not limited to implied warranties for the book’s quality, performance, merchantability, or fitness for any particular purpose. Neither Wordware Publishing, Inc. nor its dealers or distributors shall be liable to the purchaser or any other person or entity with respect to any liability, loss, or damage caused or alleged to have been caused directly or indirectly by this book.

Portions of this book contain charts, graphs, tables, and other materials that are copyrighted by Sybase, Inc., and are used with permission.

All inquiries for volume purchases of this book should be addressed to Wordware Publishing, Inc., at the above address. Telephone inquiries may be made by calling:

(972) 423-0090


SYBASE, INC., AND ITS SUBSIDIARIES DO NOT TAKE ANY RESPONSIBILITY FOR THE CONTENT OF THE BOOK. SYBASE DISCLAIMS ANY LIABILITY FOR INACCURACIES, MISINFORMATION, OR ANY CONTENT CONTAINED IN, OR LEFT OUT OF THIS BOOK.


Dedications

To my family, tons! — BRT

To Karen, Mom, and Dad — NA

To my best friend and wife, Carol — SWB

To my boys — CKT


Contents

Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . xxi

About the Authors . . . . . . . . . . . . . . . . . . . . . . . . . . xxv

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii

Part I: New Features Overview

Chapter 1 Exploring the Sybase Galaxy . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Part I — New Features Overview . . . . . . . . . . . . . . . . . . . 4

System Maintenance Improvements. . . . . . . . . . . . . . . . 4

Partition Management — Semantic Partitions . . . . . . . . . . 5

Scrollable Cursors . . . . . . . . . . . . . . . . . . . . . . . . . 5

Overview of Changes to Query Processing . . . . . . . . . . . . 6

Detection and Resolution of Performance Issues in Queries . . . 6

Computed Columns . . . . . . . . . . . . . . . . . . . . . . . . 7

Functional Indexes. . . . . . . . . . . . . . . . . . . . . . . . . 7

Capturing Query Processing Metrics . . . . . . . . . . . . . . . 8

Plan Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Sybase Software Asset Management (SySAM) 2.0 . . . . . . . . 9

Installation of ASE 15 . . . . . . . . . . . . . . . . . . . . . . . 9

Part II — Pre-15 Improvements . . . . . . . . . . . . . . . . . . . 10

Multiple tempdb Databases . . . . . . . . . . . . . . . . . . . 10

MDA Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Java and XML . . . . . . . . . . . . . . . . . . . . . . . . . . 11

The Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Sample Certification Exam. . . . . . . . . . . . . . . . . . . . 11

Use Cases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3, 2, 1, Contact! . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Chapter 2 System Maintenance Improvements . . . . . . . . . . . . . . . . . . . . . . 13

Recent Pre-ASE 15 Improvements . . . . . . . . . . . . . . . . . . 13

Multiple tempdb . . . . . . . . . . . . . . . . . . . . . . . . . 14

Native Data Encryption/Security Enhancements . . . . . . . . 14

Automatic Database Expansion . . . . . . . . . . . . . . . . . 15

The Basics . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Job Scheduler. . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Basic Components. . . . . . . . . . . . . . . . . . . . . . 18

Installation of Job Scheduler . . . . . . . . . . . . . . . . 18

ASE 15 Improvements . . . . . . . . . . . . . . . . . . . . . . . . 19


Row Locked System Catalogs . . . . . . . . . . . . . . . . . . 19

Update Statistics . . . . . . . . . . . . . . . . . . . . . . . . . 20

Updates to Partition Statistics . . . . . . . . . . . . . . . . 20

Automatic Update Statistics . . . . . . . . . . . . . . . . . . . 22

Datachange . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

Why Use datachange? . . . . . . . . . . . . . . . . . . . . 27

Datachange, Semantic Partitions, and Maintenance Schedules . 28

Local Indexes. . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

sp_helpindex . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Partition-level Utilities . . . . . . . . . . . . . . . . . . . . . . 33

Partition Configuration Parameters . . . . . . . . . . . . . 34

Utility Benefits from Semantic Partitions . . . . . . . . . . 35

Partition-specific Database Consistency Checks (dbccs) . . 35

Reorg Partitions . . . . . . . . . . . . . . . . . . . . . . . 38

Changes to the bcp Utility . . . . . . . . . . . . . . . . . . 39

Truncate Partitions . . . . . . . . . . . . . . . . . . . . . 43

Very Large Storage System. . . . . . . . . . . . . . . . . . . . . . 44

Disk Init . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

Large Identifiers . . . . . . . . . . . . . . . . . . . . . . . . . 45

Long Identifiers . . . . . . . . . . . . . . . . . . . . . . . 46

Short Identifiers . . . . . . . . . . . . . . . . . . . . . . . 46

Unicode Text Support. . . . . . . . . . . . . . . . . . . . . . . . . 47

New Datatypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

New Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

Deprecated Functions . . . . . . . . . . . . . . . . . . . . . . . . . 50

New Configuration Parameters . . . . . . . . . . . . . . . . . . . . 51

Eliminated Configuration Parameters. . . . . . . . . . . . . . . . . 52

New Global Variables. . . . . . . . . . . . . . . . . . . . . . . . . 52

Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

Chapter 3 Semantic Partitions and Very Large Database (VLDB) Support . . . . . . . . 53

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

Why Partition Data? . . . . . . . . . . . . . . . . . . . . . . . 55

Benefits of Partitioning. . . . . . . . . . . . . . . . . . . . . . 56

Partition Terminology . . . . . . . . . . . . . . . . . . . . . . 57

Semantic Partitions . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Configuring ASE for Semantic Partitioning . . . . . . . . . . . 62

Partition Support in ASE 15 . . . . . . . . . . . . . . . . . . . 63

Partition Types . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Range Partitioning. . . . . . . . . . . . . . . . . . . . . . 65

Hash Partitioning . . . . . . . . . . . . . . . . . . . . . . 70

List Partitioning . . . . . . . . . . . . . . . . . . . . . . . 75

Round-robin Partitioning . . . . . . . . . . . . . . . . . . 78


Partitioning Strategies. . . . . . . . . . . . . . . . . . . . . . . . . 83

Inserting, Updating, and Deleting Data in Partitions . . . . . . . . . 84

Inserting Data into Semantic Partitions . . . . . . . . . . . . . 84

Inserting Data into Range Partitions . . . . . . . . . . . . . . . 84

Inserting Data into Hash Partitions. . . . . . . . . . . . . . . . 86

Inserting Data into List Partitions . . . . . . . . . . . . . . . . 86

Deleting Data from All Semantic Partitions . . . . . . . . . . . 86

Updating Data in All Semantic Partitions . . . . . . . . . . . . 86

Built-in Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

Data Partition Implementation and Upgrade Strategies . . . . . . . 89

Index Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Local Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

Clustered Prefixed Index on Range Partitioned Table . . . 95

Clustered Non-Prefixed Index on Range Partitioned Table . . . 97

Clustered Prefixed Index on List Partitioned Table . . . . . . 99

Clustered Non-Prefixed Index on List Partitioned Table . . . 101

Clustered Prefixed Index on Round-robin Partitioned Table . . 104

Clustered Non-Prefixed Index on Round-robin Partitioned Table . 106

Clustered Non-Prefixed Index on Hash Partitioned Table . . . 108

Clustered Prefixed Index on Hash Partitioned Table . . . 110

Global Index . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Global Nonclustered Prefixed Index on Range Partitioned Table . 114

Global Nonclustered Prefixed Index on List Partitioned Table . 116

Global Nonclustered Prefixed Index on Round-robin Partitioned Table . 118

Global Nonclustered Prefixed Index on Hash Partitioned Table . 120

Query Processor and Partition Support . . . . . . . . . . . . . . . 122

ASE 15 Optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . 124

Partition Maintenance . . . . . . . . . . . . . . . . . . . . . . . . 124

Altering Data Partitions. . . . . . . . . . . . . . . . . . . . . 124

Unpartition a Table. . . . . . . . . . . . . . . . . . . . . 125

Change the Number of Partitions . . . . . . . . . . . . . 126

Add a Partition to a Table . . . . . . . . . . . . . . . . . 126

Drop Partitions . . . . . . . . . . . . . . . . . . . . . . . 130

Modifications to the Partition Key . . . . . . . . . . . . . 131

Partition Information . . . . . . . . . . . . . . . . . . . . . . . . 134


Influence of Partitioning on DBA Activities . . . . . . . . . . . . 143

Influence of Partitioning on Long-time Archival . . . . . . . . . . 143

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

Chapter 4 Scrollable Cursors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

Scrollable Cursor Background. . . . . . . . . . . . . . . . . . . . 146

Cursor Scrollability . . . . . . . . . . . . . . . . . . . . . . . . . 146

Cursor-related Global Variables . . . . . . . . . . . . . . . . . . . 148

Changes to the sp_cursorinfo System Procedure . . . . . . . . . . 150

Be Aware of Scrollable Cursor Rules! . . . . . . . . . . . . . . . 151

Cursor Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Demonstration 1: Update to a Row Already Fetched. . . . . . 154

Demonstration 2: Update to a Row Not Yet Fetched. . . . . . 156

Cursor Sensitivity — An Exception . . . . . . . . . . . . . . 157

Locking Considerations with Cursors . . . . . . . . . . . . . 158

Impact on tempdb Usage . . . . . . . . . . . . . . . . . . . . 159

Worktable Materialization with Scrollable Sensitive Cursors . . . 160

Conclusion of Sensitive vs. Insensitive Cursors . . . . . . . . 163

Sybase Engineer’s Insight . . . . . . . . . . . . . . . . . . . . . . 164

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

Future Direction. . . . . . . . . . . . . . . . . . . . . . . . . 165

Chapter 5 Overview of Changes to the Query Processing Engine. . . . . . . . . . . . 167

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

Optimization Goals . . . . . . . . . . . . . . . . . . . . . . . . . 168

allrows_oltp . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

allrows_mix . . . . . . . . . . . . . . . . . . . . . . . . . . . 169

allrows_dss . . . . . . . . . . . . . . . . . . . . . . . . . . . 169

Determining the Current Optimization Goal . . . . . . . . . . 170

Optimization Criteria . . . . . . . . . . . . . . . . . . . . . . . . 170

merge_join . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

merge_union_all . . . . . . . . . . . . . . . . . . . . . . . . 171

merge_union_distinct . . . . . . . . . . . . . . . . . . . . . . 171

multi_table_store_ind . . . . . . . . . . . . . . . . . . . . . . 171

opportunistic_distinct_view . . . . . . . . . . . . . . . . . . 171

parallel_query . . . . . . . . . . . . . . . . . . . . . . . . . . 171

hash_join . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

Optimization Timeout Limit. . . . . . . . . . . . . . . . . . . . . 172

Query Processor Improvements . . . . . . . . . . . . . . . . . . . 174

Datatype Mismatch . . . . . . . . . . . . . . . . . . . . . . . 175

Partition Elimination and Directed Joins . . . . . . . . . . . . 177

Tables with Highly Skewed Histogram Values. . . . . . . . . 179

Group By and Order By . . . . . . . . . . . . . . . . . . . . 181


or Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

Star Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

Chapter 6 Detection and Resolution of Query Performance Issues . . . . . . . . . . . 187

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

An Approach to Poor Query Performance Diagnosis . . . . . . . . 188

Common Query Performance Factors . . . . . . . . . . . . . . . . 190

Eliminating Causes for Sub-Optimal Plan Selection . . . . . . 191

Find Missing or Invalid Statistics . . . . . . . . . . . . . 191

Consider Range Cell Density on Non-Unique Indexes . . 191

Identify Index Needs . . . . . . . . . . . . . . . . . . . . 192

Identify Poor Index Strategy . . . . . . . . . . . . . . . . 192

Fragmentation of Data . . . . . . . . . . . . . . . . . . . 192

Resolve Partition Imbalance . . . . . . . . . . . . . . . . 193

Reset Server- or Session-level Options . . . . . . . . . . 193

Overengineered Forceplan . . . . . . . . . . . . . . . . . 194

Invalid Use of Index Force. . . . . . . . . . . . . . . . . 194

Inefficient Query Plan Forced by Abstract Plan . . . . . . 195

Query Processor “set options” — The Basics . . . . . . . . . . . . 195

Query Optimizer Cost Algorithm. . . . . . . . . . . . . . . . 198

ASE 15 vs. 12.5.x Cost Algorithm . . . . . . . . . . . . . . . 199

Query Processor “set options” — Explored . . . . . . . . . . . . . 200

show_missing_stats . . . . . . . . . . . . . . . . . . . . . . . 200

show_elimination . . . . . . . . . . . . . . . . . . . . . . . . 203

show_abstract_plan . . . . . . . . . . . . . . . . . . . . . . . 204

Why Use Abstract Plans for ASE 15? . . . . . . . . . . . . . 207

Application of Optimization Tools . . . . . . . . . . . . . . . . . 208

Optimization Goal Performance Analysis . . . . . . . . . . . 208

Optimization Criteria Performance Analysis . . . . . . . . . . 210

Optimization Timeout Analysis . . . . . . . . . . . . . . . . 212

Suggested Approach to Fix Optimization Timeout Problems . . . 216

Detection, Resolution, and Prevention of Partition-related Performance Issues . . . 217

Data Skew Due to Incorrect Partition Type or Poor Partition Key Selection . . . 218

Effect of Invalid Statistics on Table Semantically Partitioned . . . 220

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

Chapter 7 Computed Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

Key Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226

Materialization . . . . . . . . . . . . . . . . . . . . . . . . . 226


Deterministic Property . . . . . . . . . . . . . . . . . . . . . 228

Relationship between Deterministic Property and Materialization . . . 229

Deterministic and Materialized Computed Columns . . . 229

Deterministic and Nonmaterialized Computed Columns . . . 229

Nondeterministic and Materialized Computed Columns . . . 230

Nondeterministic and Nonmaterialized Computed Columns . . . 230

Benefits of Using Computed Columns . . . . . . . . . . . . . . . 231

Provide Shorthand and Indexing for an Expression . . . . . . 231

Composing and Decomposing Datatypes. . . . . . . . . . . . 231

User-defined Sort Order . . . . . . . . . . . . . . . . . . . . 232

Rules and Properties of Computed Columns . . . . . . . . . . . . 233

Sybase Enhancements to Support Computed Columns . . . . . . . 235

Create Table Syntax Change . . . . . . . . . . . . . . . . . . 235

Alter Table Syntax Change . . . . . . . . . . . . . . . . . . . 235

System Table Changes . . . . . . . . . . . . . . . . . . . . . 236

Stored Procedure Changes . . . . . . . . . . . . . . . . . . . 237

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

Chapter 8 Functional Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241

Computed Column Index . . . . . . . . . . . . . . . . . . . . . . 242

Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

Rules and Properties of a Computed Column Index . . . . . . 246

Feature Benefits. . . . . . . . . . . . . . . . . . . . . . . . . 246

Feature Limitations . . . . . . . . . . . . . . . . . . . . . . . 248

Impacts to tempdb . . . . . . . . . . . . . . . . . . . . . . . 248

Impact to Existing Application Code . . . . . . . . . . . . . . 249

Determining When to Use a Computed Column Index. . . . . 250

Optimizer Statistics . . . . . . . . . . . . . . . . . . . . . . . 251

Function-based Index . . . . . . . . . . . . . . . . . . . . . . . . 251

Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251

Rules and Properties of a Function-based Index . . . . . . . . 252

Feature Benefits. . . . . . . . . . . . . . . . . . . . . . . . . 253

Feature Limitations . . . . . . . . . . . . . . . . . . . . . . . 256

Impacts to tempdb . . . . . . . . . . . . . . . . . . . . . . . 256

Impact to Existing Application Code . . . . . . . . . . . . . . 257

Determining the Use of a Function-based Index . . . . . . . . 257

Optimizer Statistics . . . . . . . . . . . . . . . . . . . . . . . 258

Behind the Scenes . . . . . . . . . . . . . . . . . . . . . . . . . . 258

Getting Index Information . . . . . . . . . . . . . . . . . . . . . . 258

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259


Chapter 9 Capturing Query Processing Metrics . . . . . . . . . . . . . . . . . . . . . 261

Alternatives to Query Processing Metrics . . . . . . . . . . . . . . 261

Introduction to Query Processing Metrics. . . . . . . . . . . . . . 262

Contents of sysquerymetrics. . . . . . . . . . . . . . . . . . . . . 263

Contents of the sysquerymetrics View . . . . . . . . . . . . . 264

How to Enable QP Metrics Capture . . . . . . . . . . . . . . . . . 265

Captured Information Explored . . . . . . . . . . . . . . . . . . . 266

Stored Procedures. . . . . . . . . . . . . . . . . . . . . . . . 266

Triggers and Views . . . . . . . . . . . . . . . . . . . . . . . 270

Execute Immediate . . . . . . . . . . . . . . . . . . . . . . . 270

Accessing Captured Plans . . . . . . . . . . . . . . . . . . . . . . 271

How Is the QP Metrics Information Useful? . . . . . . . . . . . . 273

Identification of Performance Regression . . . . . . . . . . . . . . 276

Comparing Metrics for a Specific Query between Running Groups . . . 277

Comparing Metrics for All Queries between Running Groups . . . 279

Why Separate the QP Metrics Data by gid? . . . . . . . . . . 280

Syntax Style Matters; Spacing Does Not . . . . . . . . . . . . . . 281

Clearing and Saving the Metrics. . . . . . . . . . . . . . . . . . . 283

Relationship between Stats I/O and QP Metrics I/O Counts . . . . 284

Information for Resource Governor . . . . . . . . . . . . . . . . . 285

Space Utilization Considerations . . . . . . . . . . . . . . . . . . 285

Limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286

Chapter 10 Graphical Plan Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287

Graphical Plan Viewer from Interactive SQL . . . . . . . . . . . . 287

Graphical Query Tree Using Set Options . . . . . . . . . . . . . . 294

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296

Chapter 11 Sybase Software Asset Management (SySAM) 2.0. . . . . . . . . . . . . . 297

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297

Prior to ASE 15 . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

With ASE 15. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299

Components of Asset Management . . . . . . . . . . . . . . . . . 299

SySAM Server . . . . . . . . . . . . . . . . . . . . . . . . . 299

SySAM Utility Program — lmutil . . . . . . . . . . . . . . . 300

SySAM Reporting Tool. . . . . . . . . . . . . . . . . . . . . 300

System Environment Variables . . . . . . . . . . . . . . . . . 301

License File . . . . . . . . . . . . . . . . . . . . . . . . . . . 302

Options File . . . . . . . . . . . . . . . . . . . . . . . . . . . 303

Properties File. . . . . . . . . . . . . . . . . . . . . . . . . . 303

The SySAM Environment . . . . . . . . . . . . . . . . . . . 304

Standalone License Server . . . . . . . . . . . . . . . . . 304


Networked License Server . . . . . . . . . . . . . . . . . 305

Redundant License Server . . . . . . . . . . . . . . . . . 305

Acquiring Product Licenses . . . . . . . . . . . . . . . . . . . . . 306

Product Licenses. . . . . . . . . . . . . . . . . . . . . . . . . . . 309

Try and Buy. . . . . . . . . . . . . . . . . . . . . . . . . . . 309

License Activation . . . . . . . . . . . . . . . . . . . . . . . 309

SySAM Administration . . . . . . . . . . . . . . . . . . . . . . . 310

sp_lmconfig . . . . . . . . . . . . . . . . . . . . . . . . . . . 310

ASE 15 SySAM Upgrade Process. . . . . . . . . . . . . . . . . . 312

SySAM Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . 313

Summary Reports . . . . . . . . . . . . . . . . . . . . . . . . 313

Server Usage Reports . . . . . . . . . . . . . . . . . . . . . . 319

Raw Data Reports. . . . . . . . . . . . . . . . . . . . . . . . 320

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

Chapter 12 Installation of ASE Servers . . . . . . . . . . . . . . . . . . . . . . . . . . 323

Prior to Installation for All Methods . . . . . . . . . . . . . . . . 324

Installation with Resource Files . . . . . . . . . . . . . . . . . . . 325

Notes for Resource File Installation of ASE . . . . . . . . . . 325

Installation of ASE Components with a Resource File . . . . . 330

GUI Installation Method with srvbuild Executable . . . . . . . . . 332

Installation with the Dataserver Executable . . . . . . . . . . . . . 352

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356

Part II: Pre-15 Improvements

Chapter 13 Multiple Temporary Databases . . . . . . . . . . . . . . . . . . . . . . . . 359

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359

Purposes for Multiple Temporary Databases . . . . . . . . . . . . 360

Prior to ASE 15 . . . . . . . . . . . . . . . . . . . . . . . . . 360

With ASE 15 . . . . . . . . . . . . . . . . . . . . . . . . . . 361

System Catalog Changes. . . . . . . . . . . . . . . . . . 361

directio Support . . . . . . . . . . . . . . . . . . . . . . 361

update statistics . . . . . . . . . . . . . . . . . . . . . . 363

Insensitive Scrollable Cursors . . . . . . . . . . . . . . . 364

Semi-sensitive Scrollable Cursors . . . . . . . . . . . . . 364

Sensitive Scrollable Cursors . . . . . . . . . . . . . . . . 364

How to Decide When to Add a Temporary Database . . . . . . . . 365

Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365

What Are Your Needs? . . . . . . . . . . . . . . . . . . . . . 366

Implementation Steps . . . . . . . . . . . . . . . . . . . . . . . . 367

Determining Available Temporary Databases. . . . . . . . . . . . 368

Sample Setup for Temporary Database for “sa” Use Only . . . . . 369

Other Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371


Dropping Temporary Databases . . . . . . . . . . . . . . . . 371

Altering a Temporary Database. . . . . . . . . . . . . . . . . 372

@@tempdb . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

Chapter 14 The MDA Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

What Are the MDA Tables? . . . . . . . . . . . . . . . . . . . . . 373

Past Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374

MDA Table Installation . . . . . . . . . . . . . . . . . . . . . . . 376

MDA Table Server Configuration Options . . . . . . . . . . . . . 377

The Parent Switch. . . . . . . . . . . . . . . . . . . . . . . . 379

The MDA Tables . . . . . . . . . . . . . . . . . . . . . . . . . . 380

Changes from ASE 12.5.3 . . . . . . . . . . . . . . . . . . . . . . 382

What Is Meant by “stateful” Tables? . . . . . . . . . . . . . . . . 383

Stateful MDA Table Data Management . . . . . . . . . . . . . . . 385

SQL Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

Useful MDA Table Queries . . . . . . . . . . . . . . . . . . . . . 391

MDA Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . 393

Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393

Chapter 15 Java, XML, and Web Services in ASE. . . . . . . . . . . . . . . . . . . . . 395

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396

Java in the Database . . . . . . . . . . . . . . . . . . . . . . . . . 396

Installing Java Classes . . . . . . . . . . . . . . . . . . . . . 397

Creating Java Classes and JARs . . . . . . . . . . . . . . 397

Using the installjava Utility . . . . . . . . . . . . . . . . 398

Configuring Memory for Java in the Database . . . . . . 398

Java Classes as Datatypes. . . . . . . . . . . . . . . . . . . . 399

An Example of Table Definition Using a Java Class . . . 400

Performance Considerations . . . . . . . . . . . . . . . . 400

An Example of Using a Java Class within a Select . . . . 400

Executing Java Methods . . . . . . . . . . . . . . . . . . . . 400

Class Static Variables . . . . . . . . . . . . . . . . . . . 401

Recommendations and Considerations . . . . . . . . . . . . . 401

XML in the Database . . . . . . . . . . . . . . . . . . . . . . . . 402

XML Stored in the Database . . . . . . . . . . . . . . . . . . 402

Option 1: Store the XML Document into a Text Datatype . . . 403

Option 2: Store the XML Document into an Image Datatype Using xmlparse . . . 404

Option 3: Store the XML Document into an Image Datatype Using Compression . . . 404

Option 4: Store the XML Document Outside the Database . . . 405

HTML Stored in the Database . . . . . . . . . . . . . . . 406


Recommendations and Considerations . . . . . . . . . . 406

Performance and Sizing . . . . . . . . . . . . . . . . . . 407

SQL Result Sets Converted to Return an XML Document. . . 410

Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411

Web Services Producer . . . . . . . . . . . . . . . . . . . . . 411

Web Services Consumer . . . . . . . . . . . . . . . . . . . . 413

Recommendations and Considerations . . . . . . . . . . 415

Appendix A Sybase ASE 15 Certification Sample Questions and Answers . . . . . . . 417

Appendix B Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467


Foreword

Sybase Adaptive Server Enterprise (ASE) has been a leading RDBMS for mission-critical applications for almost two decades. With ASE 15, Sybase has continued the tradition of introducing leading edge database technologies to address our customers’ needs. These technologies cover security, unstructured data management, and operational intelligence. ASE 15 provides significant improvements for DSS and OLTP applications thanks to innovative query processing technology. The result is a clear understanding of the optimal usage of the features and functionality.

The Official New Features Guide to Sybase ASE 15 is the first book describing ASE 15. Of particular interest are details about new features such as semantic partitions, computed columns, function indexes, and scrollable cursors. The book is also valuable for its guidance on diagnosing, resolving, and optimizing the overall system performance.

The authors collectively have more than 40 years of experience with Sybase ASE as DBAs at some of Sybase’s largest and most demanding customers. Their background has enabled them to create a book with great practical value.

The authors’ material is presented in a way that is useful to readers independent of their experience level with ASE. The Official New Features Guide to Sybase ASE 15 will be an invaluable asset to DBAs, developers, and consultants working with Sybase ASE.

Dr. Raj Nathan

Senior Vice President

Information Technology Solutions Group

Sybase, Inc.


Acknowledgments

The authors would like to thank Celeste Noren for starting this whole process. Your continual support and upbeat attitude have allowed us to stay focused on the job at hand. We thank Joe Pearl for his tireless and thorough editing and brilliant input. Special thanks to Irfan Khan at Sybase Engineering for providing us with early access to the new features and documentation for ASE 15. We need to thank Rob Verschoor, Mark Kusma, and Jeff Tallman of Sybase for their thorough technical reviews and timely feedback. We thank Joan Fronske and Tom Traubitz from Sybase Marketing for making all the arrangements with Sybase. We would like to thank Tim McEvoy at Wordware Publishing for getting this book put together in such a timely manner. A special thanks goes to Dean Kent for his legal counsel during the process and for all the good lunches. Mostly, we would like to thank our families for being understanding and gracious.


I would like to start by recognizing the talented and dedicated group of authors — Steve Bradley, Naresh Adurty, and Carrie King Taylor — who accepted my challenge to create this book about the new features for Sybase ASE 15. With all of the personal challenges and commitments we each faced during the development of this book, I’m even more pleased with the final product. As our careers diverge in the future, I’m sure each of us will fondly recall the many challenges, good times, and late nights spent on the development of this book.

Next, I want to personally thank the many Sybase employees who helped with this book in many capacities: Celeste Noren, Joe Pearl, Rob Verschoor, Irfan Khan, Jeff Tallman, Mark Kusma, Joan Fronske, and Tom Traubitz.

I also need to thank the many friends and family members in my life: my parents, Art and Nancy, my brother, Grant, Aunt Soozie, Frank, Carol, and my grandfather Robert Taylor, who left this world during the writing of this book. I also want to thank some of my friends who have made life very interesting and with whom I have shared many of life’s milestones: Steve, Daryl, Dave, Chris, and Mike.

I owe the biggest thanks to my family, especially my wife and co-author, Carrie. I appreciate how you are able to keep me motivated, and still find time for our boys and your contributions to this book. You are the reason and motivation for my success. I sure love you! I also owe many great big hugs and kisses to my two big boys, Elliot and Jonah, for giving up their daddy for many weekends and nights. Each of you did your best to remind me of how important it is to spend quality time with family, despite the many demands of authorship.

— BRT

First, I would like to thank Brian for considering me talented enough to be a contributing author and my poor judgment that made me agree to his request. When Brian approached me with this idea, I had just been married for a month to my wonderful and naïve wife, Karen, and was about to start my Executive MBA from Duke, for which I had to fly from Florida to Raleigh every other weekend. My mind envisioned mansions and big boats following the success of this book. After talking to the publisher, I realized that we will not be anywhere close to buying a boat. We will be lucky to be able to buy a life jacket with the new riches. By that time, it was too late to say no.

I would like to thank Karen for supporting me, at least outwardly, throughout this madness; my parents, for giving me the freedom and courage to make insane decisions and for constantly reminding me that I am the cutest kid in the world and thus stroking my fragile ego; and my failed stand-up comedy career, which forced me to continue my career as a Sybase DBA for the past decade.

— NA

First, I would like to thank Brian for taking on this initiative and considering me to be an author in developing this work. Second, I would like to thank the other authors for the time and patience that each of you have shown during this process. As difficult as it is for one person to produce such a work, four people can make it even more strenuous. Thank you, guys, for keeping the process lighthearted and fun. It was a pleasure working with you. I would like to thank my parents, Charlie and Betty, for instilling in me the desire and pride to do the best I can in whatever I attempt. And most importantly, I would like to thank my best friend for all of her support during this time — my wife and my love, Carol.

— SWB

I would like to recognize my co-authors Brian, Naresh, and Steve. You guys rock. Special thanks go to Carol and Karen for sacrificing your family time so we could complete this book. I’d like to acknowledge my dad (posthumously), my mom, and my siblings (“The Chills”…eyes loaf) for making me the person I am today. Without them, I might have turned out ordinary. A special thanks to my friends Linda, Katrina, Colleen, Gina, Heather, Chris, Daryl, Terri, Allie, and September for providing so much fun and inspiration during this process. Most importantly, I’d like to thank my husband, Brian, and wonderful sons, Elliot and Jonah. You are the lights of my life. I love you more, more, more, more, more!

— CKT


About the Authors

Brian Taylor is a Sybase Certified Professional DBA with over 11 years of experience in the Information Technology (IT) industry. Brian has been a presenter at the Sybase TechWave conference. He has also presented an ASE 15 case study through a live web seminar, as well as delivered web seminars based on his beta testing efforts with ASE 15. He is a contributing author to Wordware’s Administrator’s Guide to Sybase ASE 12.5. Brian has a BS in management information systems and finance from Florida State University and is currently pursuing his MBA from Florida State University.

Naresh Adurty has been working with Sybase for the past 12 years. He has an MS in mechanical engineering from Oklahoma State University and is currently pursuing his MBA from Duke’s Fuqua School of Business. He is also a professional stand-up comedian with radio and TV credits including HBO.

Steve Bradley is a Brainbench Certified Sybase DBA with over 25 years of experience in the IT field in the areas of data center operations, application development, MVS administration, database administration, and vendor technical sales support. Steve has worked with Sybase products for over 12 years and has been a presenter at the 2004 and 2005 Sybase TechWave conferences. Steve has a BS in computer science from the University of South Carolina and an MBA from University of Phoenix, Tampa.


Carrie King Taylor has been working with Sybase as a DBA for 8 years with over 15 years of experience in the IT industry. Carrie has a diverse IT background as she has served as a Unix systems administrator, software quality analyst, and software developer, and has worked in software configuration and implementation. She has a BS in business information systems from Indiana Wesleyan University.

About the Contributing Authors

Mark A. Shelton is an enterprise architect with Nielsen Media Research. He has more than 25 years of experience in the IT industry and has been working with Sybase technologies since 1993. His work positions have included DBA, developer, business analyst, project manager, and architect. Mark holds a BS in computer science and a BS in electrical engineering from the University of Dayton, and an MBA from the University of South Florida.

Jagan Reddy is a certified professional senior DBA at America Online, Inc. (AOL). He has more than 15 years of database administration experience, and has been a presenter at the Sybase TechWave conference. He started using Sybase technologies in 1989 as a Sybase DBA. He has spent most of his career as a DBA, project lead, and system administrator. He holds a master’s degree in plant genetics and plant breeding from Andhra Pradesh Agricultural University in India, and a master’s degree in computer science from Bowie State University in Maryland.


Introduction

We’ve read the newsgroups. We’ve seen the blogs. We know you’re out there. The talented Sybase database administrators, developers, managers, and users are out there. This book is for you.

We will attempt to guide you, the reader, through the enhancements of ASE 15. In addition to the enhancements, we will also discuss how these enhancements can be applied to a “real-life” situation. We have polled many fellow database administrators about their thoughts on Sybase ASE regarding the new features. We have tried to incorporate as many of these ideas as possible.

Audience

The audience for this book is wider than just database administrators. This book can be used by managers to foresee possible project direction for more than just the next release of a product or application. It can assist the system architect in determining the best solution for enterprise applications. It can be used to educate the developer about features that are now available that will make the development process easier and more manageable. But most importantly, this book is for the database administrator. This book will guide you through the practical use of the new features in Sybase ASE 15.

Scope

This book is an overview of the new features available in ASE 15. We also touch on some existing features that were implemented since the last series of administration guides was released. These preexisting features were deemed worthy of reference based on the overall ASE road map for future releases and the direction of Sybase toward a flexible product geared toward service-oriented architecture (SOA). We will demonstrate and clarify the new features and how they integrate into existing systems.


Conventions

This book follows certain typographic conventions, as outlined below:

Arial font is used for commands, options, utility names, and other keywords within the text.

Italics is used to show generic variables and options; these should be replaced when actually writing code.

Constant width is used to show syntax, the contents of files, and the output from commands.

%, $ are used in some examples as the operating system prompt.

[ ] surround optional elements in a description of syntax (the brackets themselves should never be typed).

{ } indicates you must choose at least one of the enclosed options.

( ) are part of the command.

| is used in syntax descriptions to separate items for which only one alternative may be chosen at a time.

, indicates as many of the options shown can be selected but must be separated by commas.

… indicates the previous option is repeatable.

An example of these conventions is:

update statistics table_name
    [[partition data_partition_name] [(column_list)] |
    index_name [partition index_partition_name]]
    [using step values]
    [with consumers = consumers] [, sampling = N percent]


A final word about syntax: No capitalization of objects, databases, or logins/users was used where syntax would be incorrect and would cause errors. Sentences starting with keywords may start with lowercase letters. Code and output are presented separately from the regular paragraph.

Disclaimer

Some of this material is based upon a prereleased version of ASE 15. Some of the details, syntax, and output of some of the commands and procedures may differ slightly from the General Availability (GA) release of ASE 15.

While the authors have attempted to ensure the screen shots, syntax, and output reflect the GA release of ASE 15, it is possible the book may contain some information from the prereleased version of ASE 15, which may or may not be materially different from the GA release of ASE 15.



Part I

New Features Overview


Chapter 1

Exploring the Sybase Galaxy

The times, they are a-changin’. This observation applies to relational database management systems as well. Historically, databases have revolved around structured data and how it is managed. The database administrator must now look beyond managing structured data and consider managing information.

Sybase rightly code-named the latest release of Sybase Adaptive Server Enterprise (ASE) “Galaxy.” We are moving beyond the world of data management as we know it and rocketing into the new galaxy of information management.

What is the difference between data and information? Data is simply the atomic component of information, whereas information is taking the data and making it mean something. As we begin to open our silos and release the data within, that data can then become useful information to everyone in our individual galaxies.

What does this mean to the database administrator? It means supporting more complex queries for greater flexibility and operational scalability. It also means looking beyond our previous ideals of data storage and broadening our concepts to the possibilities of larger amounts of data and faster access.

Let’s take a look at the new features available in ASE 15 as we begin to explore the galaxy of information management. Additionally, our journey will cover a few of the recent pre-ASE 15 enhancements and explore how they enhance and expand the ASE product.


Part I — New Features Overview

Unlike previous releases, the release of ASE 15 incorporates sweeping changes to the ASE product. Many of these changes are designed to increase the database server’s ability to handle a high volume and wide variety of data while responding with greater velocity.

Today’s database servers must have the ability to handle an increasing load of information. Database administrators are given the responsibility of balancing the responsiveness of systems and the availability of data. ASE 15 concurrently assists the database administrator with both of these often conflicting objectives.

ASE 15 arrives with internal enhancements to keep data moving at the highest possible velocity while minimizing the server’s need for performance tweaks, thus lowering the total cost of ownership (TCO). Features such as automatically logged execution metrics assist the database administrator in lowering the cost of development while increasing the ability to address more complex data. This chapter gives a brief overview of all of the new features of ASE 15.

System Maintenance Improvements

To achieve the goals of simplifying the maintenance of ASE as well as enhancing the performance of query processing, several system maintenance enhancements are introduced in the ASE 15 release. Included in these changes is the ability to automatically update statistics using the datachange function in conjunction with Job Scheduler, the expansion of row-level locking to system tables, and the introduction of partition-based utilities and maintenance tools.
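
For a quick taste of the mechanism, consider the sketch below, which checks how much of a table has changed and refreshes statistics only when warranted. This is an illustration rather than an excerpt from the book: the orders table and the 20 percent threshold are invented, and in practice a Job Scheduler job would wrap logic of this kind.

    -- Report the percentage of rows in orders modified since update
    -- statistics was last run; the null arguments cover all partitions
    -- and all columns.
    select datachange("orders", null, null)
    go

    -- Refresh statistics only when more than 20 percent of the data
    -- has changed (an arbitrary example threshold).
    if datachange("orders", null, null) > 20
    begin
        update statistics orders
    end
    go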

The system administration enhancements in ASE 15 are designed to lower the maintenance requirements of ASE, enabling the database administrator to target maintenance efforts on areas of ASE that truly need attention. Several of the changes allow the database administrator to reduce or altogether close the regularly scheduled maintenance windows where full system maintenance is performed.

Leading up to the ASE 15 release, many new features were added to the ASE 12.5.x versions of ASE that work hand in hand with the enhancements included in ASE 15. In Chapter 2, we discuss the following pre-ASE 15 topics: multiple tempdb, automatic database expansion, and the Job Scheduler, and how these features work in conjunction with enhancements in ASE 15 to lower TCO.


Partition Management — Semantic Partitions

Sybase ASE semantic partitions address two main issues with today’s storage systems: data velocity and data explosion. Today, systems demand access to more data, and with increasing velocity. These goals are often contradictory. The introduction of Sybase ASE 15 semantic partitions simultaneously assists with each of these goals through the physical separation of data into partitions.

The storage of current and historic data in the same database adds to the data management complexities already inherent in most environments. Semantic partitions can decrease the difficulty of managing these complexities. Semantic partitions provide a mechanism for allowing access to this data through the same interface, with no extraneous processing modifications to retrieve archived or historical data. Semantic partitions also introduce partition elimination. With semantic partitions, the optimizer is able to choose which partitions are necessary to resolve the query and therefore eliminate unnecessary data access.
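
As a preview of the syntax covered in depth in Chapter 3, the sketch below range-partitions an invented sales history table by date, so a query that touches only 2005 rows can skip the other partitions entirely. The table, column, and segment names are illustrative only.

    -- Range-partition sales history by order date. Each partition can
    -- be placed on its own segment, and the optimizer reads only the
    -- partitions a query actually needs (partition elimination).
    create table sales_history
        (order_id   int      not null,
         order_date datetime not null,
         amount     money    not null)
    partition by range (order_date)
        (p2004 values <= ('12/31/2004') on seg1,
         p2005 values <= ('12/31/2005') on seg2,
         p2006 values <= ('12/31/2006') on seg3)
    go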

The concept of data partitions is not new to Sybase ASE; however, the concept of semantic partitions is new. Semantic partitions allow the database administrator to maintain control over the placement of partitioned data and indexes within the database. This ability was not present in prior releases of Sybase ASE.

Chapter 3 elaborates on the advantages provided by semantic partitions. Additionally, the chapter presents an approach to best match the appropriate semantic partition type with a system’s data and usage requirements. The chapter specifies the differences between partition types as well as the new terminology introduced in support of this new functionality.

Scrollable Cursors

Scrollable cursors are in line with Sybase’s goal to increasingly lower the TCO of Sybase ASE. Scrollable cursors can also ease or speed the development process by replacing supplemental middleware or client tools used to manipulate a cached result set.

Chapter 4 describes how to create a scrollable cursor, explains how to manipulate a scrollable cursor, and discusses the global variables associated with a scrollable cursor.
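
As a small preview of that chapter, the sketch below declares an insensitive scrollable cursor over a pubs2-style authors table and positions it arbitrarily within the result set; the table and columns are assumptions for illustration.

    -- Declare an insensitive, scrollable cursor; the result set is
    -- fixed at open time and can be fetched in any order.
    declare au_scroll insensitive scroll cursor for
        select au_lname, au_fname from authors
    go

    open au_scroll

    fetch first au_scroll        -- first row of the result set
    fetch last au_scroll         -- jump to the last row
    fetch absolute 5 au_scroll   -- position directly on the fifth row
    fetch prev au_scroll         -- step backward one row

    close au_scroll
    deallocate cursor au_scroll
    go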


Additionally, Sybase ASE 15 introduces cursor sensitivity. Cursor sensitivity relates to the ability of ASE to allow changes to data underlying a cursor to be represented in the cursor’s result set. Cursor sensitivity also allows for a cursor performance benefit, which is explained within that chapter.

Overview of Changes to Query Processing

In ASE 15, the Query Optimizer has undergone numerous internal changes. These changes are intended to accomplish the following objectives:

•  Improve query performance
•  Limit the need for manual intervention to achieve optimal query plans
•  Ease the monitoring and diagnostic efforts needed to maintain query performance

Chapter 5 discusses the “out-of-the-box” improvements to the ASE 15 Query Optimizer. These Query Optimizer changes will lower TCO by requiring less tuning. By simply running existing queries through the new Query Optimizer, the performance can be improved. Additionally, new optimization goals are explored in conjunction with the optimization time limitations. These time limitations will provide the database administrator the ability to specify the percentage of time the Query Optimizer is allowed to spend on query optimization.
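
For instance, the optimization goal can be changed for a session or server-wide, and the timeout limit can be capped per session. The sketch below is a minimal illustration; the goal and percentage chosen are arbitrary examples.

    -- Set the optimization goal for the current session only.
    set plan optgoal allrows_oltp
    go

    -- Or change the server-wide default.
    sp_configure "optimization goal", 0, "allrows_oltp"
    go

    -- Limit optimization time to 10 percent of estimated query
    -- execution time for this session (an example value).
    set plan opttimeoutlimit 10
    go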

Detection and Resolution of Performance Issues in Queries

An ASE 15 goal is to ease the monitoring and diagnostic efforts required by the database administrator to scrutinize query performance. As discussed in Chapter 5, ASE 15’s Query Optimizer, or optimizer, contains “out-of-the-box” improvements designed to result in little or no manual intervention for the query processor to arrive at the most efficient query plan.

To accomplish these goals in ASE 15, the Query Optimizer and optimizer toolset have undergone numerous changes to accommodate the demand for simplified detection and resolution of problems. A new set of optimizer diagnostic tools for the database administrator is included in this release of ASE. Chapter 6 discusses how to enable these new diagnostics. This chapter also defines the diagnostics and offers insight into how the database administrator can use the new diagnostics to detect query-level problems. A specific problem resolution slant involving diagnostic examples with use of the new ASE 15 features, most specifically semantic partitions, is represented through example.
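
To give a flavor of what Chapter 6 covers, one of the new diagnostics can be enabled for a session as shown below; the query and table are placeholders for illustration.

    -- Ask the query processor to report columns that lacked statistics
    -- while it optimized subsequent queries in this session.
    set option show_missing_stats on
    go

    select * from orders where customer_id = 42
    go

    set option show_missing_stats off
    go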

It is also important to note that the optimizer is now often referred to as the “query processor,” and equally as often as the “Query Optimizer.” This is a departure from the normal reference to the “Optimizer” as represented in prior versions of ASE.

Computed Columns

In accordance with the Sybase road map to provide operational excellence within the database, ASE 15 introduces computed columns. The implementation of computed columns affords the database administrator the opportunity to ease development costs and provide a foundation to offer increased data velocity.

The utilization of computed columns can reduce TCO by simplifying the application development effort. The simplification is accomplished by the placement of the computational burden on the server as opposed to within the client layer. This enhancement is especially applicable to the handling of unstructured data, such as XML documents.

Chapter 7 provides the technical details on the implementation of computed columns. The chapter outlines the syntax to implement computed columns and includes several examples. Materialization and the deterministic properties of computed columns are also explained.
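
Ahead of that chapter, the sketch below gives a flavor of the syntax by defining one materialized and one virtual (nonmaterialized) computed column on an invented table.

    create table order_line
        (qty        int   not null,
         unit_price money not null,
         -- Stored physically and refreshed on insert and update
         -- (materialized).
         line_total as qty * unit_price materialized,
         -- Evaluated on the fly whenever it is selected (the default,
         -- nonmaterialized behavior).
         list_total as qty * unit_price * 1.1)
    go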

Functional Indexes

Sybase offers computed column indexes and function-based indexes to maximize the scalability and performance capabilities of Sybase ASE. These new index types allow Sybase ASE to maintain indexes based on derived data. This enhancement further emphasizes the data velocity initiatives incorporated into the release of ASE 15.


Function-based indexes provide the database administrator an additional tool to improve the performance of application logic while reducing TCO. When a DBA faced optimization scenarios for queries issued from the application, the previous solutions were to modify the application code or upgrade the underlying hardware. ASE 15 provides the ability to optimize application code with the addition of function-based indexes to the database. These performance-enhancing indexes can be applied without the impact of application maintenance or hardware upgrades.
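
A minimal sketch of the idea, assuming a customers table that is searched case-insensitively on last name (the table, column, and index names are invented):

    -- Without an index on the expression, a predicate such as
    -- upper(last_name) = ... forces a scan. A function-based index
    -- lets the optimizer use the expression directly.
    create index customers_uname_idx
        on customers (upper(last_name))
    go

    select customer_id, last_name
    from customers
    where upper(last_name) = 'TAYLOR'
    go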

The underlying benefit of function-based indexes and computed column indexes is the reduction of physical and logical I/O necessary to satisfy queries. A reduction in physical and logical I/O allows ASE to maintain performance as the storage requirements of databases increase. This provides greater flexibility to the database administrator for managing data explosion.

Chapter 8 details how to implement functional indexes, offers guidelines on when to use them, and analyzes their impact and limitations.
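As a preview of the Chapter 8 material, a function-based index is created by placing an expression in the index key list. A minimal sketch follows; the index name and the choice of expression are illustrative:

create index authors_upper_idx
on authors (upper(identification))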

Capturing Query Processing Metrics

As database administrators, we are often asked to scrutinize the performance of queries. As an initial diagnostic step, many database administrators examine the statistics I/O or statistics time output associated with a query. While this output suggests a track to take in diagnosing query performance, basic information is often lacking (for example, baseline performance for the specific query). The Query Processing Metrics capture process provides a mechanism to maintain a query baseline with minimal setup or maintenance effort.

In Chapter 9 we demonstrate the setup and value of the Query Processing Metrics capture process, as well as identify what information is maintained by the metrics capture process and where it is stored.

This feature is in line with the Sybase goal of steadily lowering the TCO of Sybase ASE. Query Processing Metrics can supplement or replace third-party tools designed to capture the SQL executed against ASE installations.
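As a preview of the Chapter 9 material, a minimal sketch of enabling metrics capture and examining the results follows; the unfiltered select is for illustration only:

-- enable metrics capture server-wide:
sp_configure "enable metrics capture", 1
go
-- or enable it for the current session only:
set metrics_capture on
go
-- captured baselines are exposed through the sysquerymetrics view:
select * from sysquerymetrics
go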


Plan Viewer

The introduction of the Plan Viewer in ASE 15 is aligned with Sybase’s goal to ease the overall management needs of ASE. The Plan Viewer is a graphical user interface (GUI) utility wrapped into the interactive ISQL tool included with the ASE 15 Open Client product. This new feature gives database administrators and developers a tool for visualizing query plans so that performance issues can be identified quickly and easily.

Chapter 10 provides instructions on how to set up and launch the Plan Viewer. The chapter continues with a demonstration of how to extract useful information from the Plan Viewer, and discusses the information and concepts contained in the Plan Viewer’s output.

Sybase Software Asset Management

(SySAM) 2.0

In conjunction with the release of ASE 15, Sybase provides a new version of Sybase Software Asset Management (SySAM). SySAM 2.0 gives Sybase a mechanism to ensure customer compliance with license agreements. For most customers, the question is not “Should we monitor for compliance?” but “How do we monitor for compliance?” SySAM 2.0 supplies the license monitoring and reporting capability, which is required for ASE 15.

In Chapter 11, SySAM 2.0 is explored in detail. The components

of the Sybase Software Asset Management are explained, as well as

the various setup options, evaluation licenses, and grace periods.

Installation of ASE 15

The focus in Chapter 12 is the installation of ASE 15. This chapter

provides an overview of the server installation process, using three

distinctly different installation methods. With each method, server

installation is covered in an easy-to-follow step-by-step manner.


Part II — Pre-15 Improvements

Why include pre-15 features in a book about ASE 15 new features?

As database administrators, the authors found very little information

about these features outside of the Sybase-provided manuals and

web-based seminars.

The pre-15 features that have been included in this book are flexible, fascinating, and handy tools. Not only are these features useful, but they set the groundwork for future expansion toward self-management and scalability in managing data explosion and very large database (VLDB) growth.

Multiple tempdb Databases

The use of multiple tempdb databases was first made available in the

12.5.0.3 release of Sybase ASE. Until now, little has been published

on the use of multiple temporary databases. Chapter 13 offers insight

to organizations debating the use of multiple tempdb databases and

explains the steps that are needed for implementation.

Once the decision to implement multiple tempdb databases is made, follow this chapter’s step-by-step example on how to implement the feature. Additionally, the chapter discusses the impact of multiple tempdb databases on several of the ASE 15 new features.

MDA Tables

MDA tables are another ASE feature first available in release 12.5.0.3. While there are a few known publications on this feature, we decided to incorporate the MDA tables into this ASE 15 book to provide another resource on MDA table application.

In Chapter 14, the new MDA table and column additions are identified. The chapter also walks through the basics of MDA table setup, identifies the information collected by the MDA tables, suggests how to manage the “statefulness” of MDA tables, and discusses the impact of changing some of the MDA tables’ configuration parameters.

The MDA tables are another ASE feature in line with the Sybase goal of steadily lowering the TCO of Sybase ASE. The MDA tables can supplement or replace tools designed to monitor ASE at the server, database, and session levels.

Java and XML

Java and XML capabilities within the database are not new to ASE.

For several releases of ASE, Java and XML utilities have been

present.

Chapter 15 provides an overview of the implementation of Java and XML in the database server, as well as the impact of XML and Java on the database. Additionally, an explanation of how to install and enable Java in the database is presented. The chapter concludes with the authors’ recommendations for the appropriate use and implementation of these features.

The Appendices

Sample Certification Exam

Appendix A contains an 80-question sample Sybase ASE 15 certification practice exam with answers. The intent of this appendix is to reinforce the Sybase ASE 15 material presented in this book. Additionally, the sample exam provides supplemental preparation for DBA certification on the Sybase ASE 15 product. Approximately 60% of the questions are based purely on ASE 15 material, while 40% are based on common Sybase ASE knowledge that applies to ASE 15 and earlier versions. The general Sybase ASE material is based on actual questions from previous certification tests.

Use Cases

All ASE 15 material covered in this book is put to practical use through the business cases presented in Appendix B. Through practical examples, the use cases demonstrate how Sybase ASE 15 can help businesses lower TCO. Additionally, the examples highlight the many advantages provided by the ASE 15 “Galaxy” features.


3, 2, 1, Contact!

It is a new and exciting time in the universe of database technology.

The use of the database will broaden to handle the new challenges of

not only data management, but information management. The future

of information management is here today. So sit back, buckle up, and

enjoy the ride as we explore the outer reaches of the “Galaxy” release

known as Sybase ASE 15.


Chapter 2

System Maintenance Improvements

For a database administrator, system maintenance can be one way to separate the wheat from the chaff. As a rule of thumb, a good database administrator should spend less time fixing server issues and more time preventing them. With the release of ASE 15, Sybase has added new features to give the database administrator a helping hand. These features not only help the database administrator, but can also perform certain tasks on the administrator’s behalf.

The new features will assist database administrators in preparing their systems for next-generation platforms. These features will also enable ASE to handle very large, data-intensive environments. This chapter focuses on the system maintenance improvements designed to help all database administrators, especially those supporting VLDB installations. The chapter also reviews the evolution of ASE 15’s new features as they progressed from ASE 12.0 through 12.5.x.

Recent Pre-ASE 15 Improvements

As mentioned in Chapter 1, Sybase has taken steps toward automating and streamlining system administration and server maintenance tasks. Many of the new features in ASE 15 are available with help from the building blocks Sybase placed within the ASE 12.5.x point releases. These pre-ASE 15 enhancements are: multiple


tempdb, native data encryption, automatic database expansion, and

the Job Scheduler feature.

Multiple tempdb

The multiple tempdb capability was introduced in ASE 12.5.0.3.

This feature allows the database administrator to create special

“user-defined” temporary databases and to distribute tempdb activity

across multiple temporary databases. The enhancement helps to eliminate a long-standing point of contention within the ASE architecture.

Constructed properly, the multiple tempdb feature can provide a “back door” into ASE in the event of a full system tempdb. A prudent database administrator can create a privileged login bound to a user-defined tempdb. From this user-defined tempdb, the privileged user can perform maintenance on the system tempdb if it experiences a log full problem. The setup of this “back door” is relatively simple, and it can save the database administrator from an unplanned recycle of ASE.
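A minimal sketch of such a setup follows; the device, database, and login names are hypothetical:

-- create a user-defined temporary database on its own devices:
create temporary database tempdb_rescue
on tempdev1 = 100 log on templogdev1 = 50
go
-- bind a privileged login to it, so that login's sessions never
-- depend on the system tempdb:
sp_tempdb 'bind', 'lg', 'sa_rescue', 'DB', 'tempdb_rescue'
go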

For further information on the implementation of multiple tempdb in ASE, please see Chapter 13, as well as the Sybase Adaptive Server System Administration Guide.

Native Data Encryption/Security Enhancements

Sybase plans to integrate the native data encryption feature into the

ASE 15 product line in the 15.0.1 interim release. For now, native

data encryption exists for ASE 12.5.3a. For detailed information on

this subject, please see Sybase’s “New Features Adaptive Server

Enterprise 12.5.3a” document.

The following items are a few of the highlights of the native data

encryption feature:

• Database administrators or security officers (sso_role) have the ability to create encryption keys at the database level using the Advanced Encryption Standard (AES) algorithm.

• Multiple encryption keys are permitted within a single ASE database.

• Encrypted data is stored on disk and in memory in encrypted form, not as plain text. The encryption can be verified with the dbcc page command:

dbcc page(dbid, pageno [,printopt [,cache [,logical [,cachename]]]])

dbcc traceon(3604)
go
-- specify 1 as the third option to dump the contents of the page
dbcc page(5, 1282, 1)
go

• Encrypted data, as opposed to plain text, is stored in the transaction log. This can be verified with the dbcc log command:

dbcc log(dbid, objid, pageno, rowno, nrecs, type, printopt)

dbcc traceon(3604)
go
dbcc log(6, 553769030)
go

• No external programming logic is necessary to decrypt encrypted data.

Recommendation: To enhance the level of security with ASE 15, separate the sso_role from the sa_role in ASE. Grant these roles to two or more different privileged users. Additionally, it is recommended to lock the sa login, as this login holds both sa_role and sso_role upon installation of ASE.
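A minimal sketch of this recommendation follows; the two login names are hypothetical:

grant role sa_role to admin_ops
grant role sso_role to admin_security
go
-- once the separated logins are verified, lock the sa login:
sp_locklogin 'sa', 'lock'
go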

Automatic Database Expansion

Automatic database expansion was first introduced in ASE 12.5.1.

This feature is used to automatically increase database and device

sizes. In order to accomplish the resize, the auto-expand feature uses

administrator-defined thresholds to determine when the expansion

will occur.

The management of automatic database expansion is accomplished with the sp_dbextend system procedure. Before automatic database expansion can be implemented, the sp_dbextend system procedure must be installed by the installdbextend script. According to Sybase’s “What’s New in ASE” guide for ASE 12.5.1, new rows are added to the sysattributes table in the master database in order to manage and document the expansion thresholds set up by the database administrator. Additionally, 93 system procedures are added to the sybsystemprocs database by the installdbextend script.


The Basics

The sp_dbextend system procedure is the tool used to set up and

maintain automatic database expansion. The procedure can be used to

provide a listing of the syntax options, which are included here for

reference:

sp_dbextend "help"

go

Usage: sp_dbextend [arguments ...]

Usage: sp_dbextend help [,<command type>]

Usage: sp_dbextend 'set', 'threshold', @dbname, @segmentname, @freespace

Usage: sp_dbextend 'set', 'database', @dbname, @segmentname {[,@growby]

[,@maxsize]}

Usage: sp_dbextend 'set', 'device', @devicename {[,@growby] [,@maxsize]}

Usage: sp_dbextend 'clear', 'threshold', @dbname, @segmentname

[,@freespace]

Usage: sp_dbextend 'clear', 'database' [,@dbname [,@segmentname]]

Usage: sp_dbextend 'clear', 'device' [,@devicename]

Usage: sp_dbextend 'modify', 'database', @dbname, @segmentname, {'growby' |

'maxsize'}, @newvalue

Usage: sp_dbextend 'modify', 'device', @devicename, {'growby' | 'maxsize'},

@newvalue

Usage: sp_dbextend {'list' | 'listfull'} [,'database' [,@dbname

[,@segmentname [,@ORDER_BY_clause]]]]

Usage: sp_dbextend {'list' | 'listfull'} [,'device' [,@devicename

[,@ORDER_BY_clause]]]

Usage: sp_dbextend 'check', 'database' [,@dbname [,@segmentname]]

Usage: sp_dbextend {'simulate' | 'execute'}, @dbname, @segmentname
[,@iterations]

Usage: sp_dbextend 'trace', {'on' | 'off'}

Usage: sp_dbextend 'reload [defaults]'

Usage: sp_dbextend {'enable' | 'disable'}, 'database' [,@dbname

[,@segmentname]]

Usage: sp_dbextend 'who' [,'<spid>' | 'block' | 'all']

The Sybase reference manual provides a detailed explanation of the commands and parameters for the sp_dbextend procedure shown in the above listing. To allow automatic database expansion, the database administrator must accomplish two main tasks with the sp_dbextend procedure: set up the sp_dbextend action, and define the sp_dbextend threshold. The setup of the action and the definition of the threshold are demonstrated in the following example, where the logsegment of the Properties database is set up to extend by 50 MB


(the action) when the logsegment reaches 5 MB (the threshold) of

remaining freespace.

Example:

sp_dbextend 'set', 'database', properties, "logsegment", '50m'

sp_dbextend 'set', 'thresh', properties, "logsegment", '5m'
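Before depending on the thresholds in production, the configuration can be exercised without actually expanding anything by using the simulate command shown in the usage listing above:

sp_dbextend 'simulate', properties, "logsegment"
go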

When the threshold of 5 MB is reached, the database’s logsegment is

extended by 50 MB. Messages similar to the following will be sent to

ASE’s errorlog:

00:00000:00026:2005/09/12 21:59:35.09 server background task message:

Threshold action procedure 'sp_dbxt_extend_db' fired in db 'properties'

on segment 'logsegment'. Space left: 2560 logical pages ('5M').

00:00000:00026:2005/09/12 21:59:35.11 server background task message:

ALTER DATABASE properties log on logspace1 = '50.0M' -- Segment:

logsegment

00:00000:00026:2005/09/12 21:59:35.13 server Extending database by 25600

pages (50.0 megabytes) on disk logspace1.

In the above example, the database is mapped to a device on which the logsegment has space available to expand; the errorlog output indicates the device logspace1 had free space remaining on which to expand.

Recommendation: Use the automatic database expansion feature of

ASE with caution. Automatically expanding a database is not always the

best solution for dealing with space utilization.

Instead of automatically expanding a database, it may be more appropriate to archive or delete data from the system, or to flush a long-running transaction from ASE. Enabling automatic expansion can also allow databases to become inconsistent in size between production, development, and test environments. Many database administrators support environments where production data is cascaded to development or test servers; automatic expansion of the production database would create a waterfall effect of database resizing for servers where data is duplicated from production to development or test regions via a dump and load process. Additionally, a larger database requires more time for restore operations, since all pages allocated to a database are recovered, regardless of the presence of data within


the disk allocations. Finally, for users of the dbcc checkstorage consistency checker, alteration of the dbccdb configuration may be necessary for databases where automatic expansion has occurred.

Job Scheduler

As a precursor to some of the automatic administration tasks, Job

Scheduler was introduced in ASE 12.5.1. In introducing this feature,

Sybase took the first steps toward offering automatic administration

from within a Sybase product.

Job Scheduler allows an administrator to create and schedule

jobs, as well as share jobs and schedules. This means one database

administrator can create a job and other database administrators can

then schedule and run that job on another server. Job Scheduler jobs

can be created from scratch using the command line or GUI, or from

a SQL batch file. Jobs can also be created from a predefined

template.

Job Scheduler can also be used as a tool for messaging. It captures the results and output of jobs and records that information in log tables. This data can then be queried and used for creating meaningful messages. In addition, Job Scheduler keeps a history of scheduled jobs. Job Scheduler is self-monitoring and removes outdated, unnecessary history records, which keeps a limit on the size of the history table. Many of the new features in ASE 15 have been integrated with Job Scheduler.

Basic Components

Job Scheduler has two main functional components: the Job Scheduler Task (JS Task) and the Job Scheduler Agent (JS Agent). The JS

Task manages the schedules and notifies the JS Agent to execute a

job. The JS Agent then carries out job execution.

Installation of Job Scheduler

Job Scheduler can be installed along with the server through the installation GUI, or it can be installed manually after the ASE server is built. Please see Chapter 12 for installation of Job Scheduler through the Sybase installation GUI. The installation methods and the creation of jobs are also well outlined in Sybase’s Job Scheduler User’s Guide.


It is important to note that Job Scheduler is backward compatible between versions of ASE, to a degree. For example, a basic maintenance job such as dbcc checkstorage can be expected to operate against a 12.5.x ASE server when launched from a Job Scheduler within ASE 15. Some of the newer job templates, such as the template to automatically update statistics, will not work against a 12.5.x server.

ASE 15 Improvements

Row Locked System Catalogs

Yes, that’s right. You read correctly. It does say “row locked system catalogs.” At last, here is the feature that many Sybase database administrators have been waiting for: key tables in all databases will be row locked. However, it is important to note that this feature is only partially implemented in the ASE 15 GA release, where modifications to any system catalog continue to use exclusive table locks, similar to the behavior exhibited by pre-ASE 15 systems.

This was a deliberate multi-phased release. Because catalog and catalog index changes were required to support data-only locking (DOL), the DOL locking schema changes are implemented as part of the ASE 15 upgrade process. This is the “preparatory” phase for the full implementation of row locked catalogs, in which DOL locking will be enabled in the code. Two phases were chosen because catalog changes were already needed to support partitioning; this avoids another catalog change and mini-upgrade when row locked catalogs (RLC) are finally implemented, allowing the feature to be delivered in a simple EBF rather than in an IR or upgrade.

Once this handy new feature is fully implemented, it will add grace and ease to server maintenance and administration tasks by allowing similar operations, such as DDL executions, to occur simultaneously. With increased concurrency on the system catalogs, maintenance operations will no longer contend with one another for catalog locks.

Another advantage of row locked system catalogs is the reduction of lock contention for DDL operations, allowing higher throughput for applications, especially those applications where


tempdb usage is high. This advantage is accomplished by easing and

even eliminating lock contention in the temporary database(s).

Blocking and deadlocks due to stored procedure renormalization are

largely eliminated.

Less contention means less waiting, which in turn means

improved application performance.

Tip: Multiple tempdbs are still a good idea to relieve log and cache

contention.

Update Statistics

The update statistics command is enhanced twofold in ASE 15. First, statistics can now be updated at the partition level. Second, Sybase introduces the datachange function, which allows the database administrator to measure the amount of change to the data underlying the collected statistics at the table, index, column, and partition level. Based on the values returned by datachange, database administrators can make update statistics operations conditional, running them only when enough data has changed at the specified level. Let’s begin by exploring the enhanced syntax for update statistics in ASE 15.

Updates to Partition Statistics

Prior to ASE 15, update statistics operations were only possible on the columns or indexes of a table in their entirety. Often, the available maintenance windows are too small to perform update statistics on large tables. Some database administrators were forced to reduce the frequency of update statistics operations, or to compromise their accuracy with statistics sampling. With the introduction of semantic partitions in ASE 15, database administrators can now update statistics at the partition level, offering an opportunity to split the update statistics operations for large objects across multiple maintenance windows.


Syntax:

update statistics table_name [[partition data_partition_name]

[(column_list)] | index_name [partition index_partition_name]]

[using step values] [with consumers = consumers]

update index statistics table_name [[partition data_partition_name]

|[index_name [partition index_partition_name]]] [using step

values] [with consumers = consumers]

update all statistics table_name [partition data_partition_name]

update table statistics table_name [partition data_partition_name]

delete [shared] statistics table_name [partition

data_partition_name] [(column_name [,column_name] ...)]

It is important to note that the old update partition statistics command is deprecated in ASE 15; as the syntax above shows, partition-level statistics are now updated with update statistics and its partition clause. Prior to ASE 15, the rationale behind update partition statistics was to get a better picture of partition skew for parallel query optimization: the more pronounced the skew, the less likely the optimizer was to choose a parallel query plan. With semantic partitioning, data skew is virtually guaranteed, since the data is partitioned by value and the values are unlikely to be evenly distributed. However, this effectively retires “partition skew” as an issue, because partition elimination, directed joins, and other parallel query optimization techniques now rely on the data semantics rather than the skew for optimization decisions. Round-robin partitions are likely still treated as before.

Examples:

This example updates statistics on the data partition part1. For partition-level update statistics, Adaptive Server creates histograms for each major attribute of the local indexes for the specified partition and creates densities for the composite attributes. Adaptive Server also creates histograms for all non-indexed columns.

update statistics ckt_table partition part1

This example updates all statistics on the data partition part1:

update all statistics ckt_table partition part1


This example regenerates systabstats data for data partition part2 of

table user_table:

update table statistics user_table partition part2

Automatic Update Statistics

Automated statistics updates can be accomplished by combining different scheduling techniques, and even made conditional through the datachange function. As discussed in the earlier section covering Job Scheduler, update statistics jobs can be managed from within ASE.

The goal of Automatic Update Statistics is to determine automatically when to run update statistics while minimizing the impact on ASE performance. Automatic Update Statistics lets users choose the objects, schedules, and datachange thresholds that automate the process. In other words, it allows the database administrator to pick the optimum time to run update statistics, and to run it only when required.

Implemented properly, the datachange function can minimize or eliminate one of the database administrator’s largest maintenance windows. With ASE 15, it becomes possible to run update statistics on systems where this maintenance effort was previously infeasible due to time constraints, especially when the datachange function is combined with partition-level update statistics and with the statistics sampling concept introduced in ASE 12.5.

Datachange

The datachange function is the key to identifying whether an update statistics operation on a table, index, partition, or column is necessary. The function returns a percentage value indicating how much the data within the object has changed. A value of 0% means the datachange function has measured no changes to the object; as the data changes, the value returned increases.

Syntax:

select datachange(object_name, partition_name, column_name)


Examples:

Measure the data changed at the table level:

select datachange("authors", null, null)

Measure the data change to the identification column of the authors

table:

select datachange("authors", null, "identification")

Measure the data change to the authors_part2 partition of the authors

table:

select datachange("authors", "authors_part2", null)

Measure the data change to the identification column contained in the

authors_part4 partition of the authors table:

select datachange("authors", "authors_part4", "identification")

It is possible for the datachange function to return values greater than 100%. This is due to the way the function measures updates: each update is counted as a delete plus an insert against the measured object, so every updated row contributes a count of 2 to the datachange counter. For inserts and deletes, the counter is incremented by 1 for each row affected.

Take a look at the following example, which demonstrates how

the datachange function can report values of over 100%.

First, consider how the datachange value is calculated for updates

using the table data described below.

Information about the identification column of the authors table:

select identification, "rowcount" = count(*)

from authors

group by identification

identification rowcount

Carrie Taylor 40000

Steve Bradley 30000

Naresh Adurty 20000

Jagan Reddy 10000

Brian Taylor 10000


Step 1: Update statistics on the identification column of the authors

table:

update statistics authors(identification)

go

Step 2: Verify the datachange function reports no data changed to the identification column of the authors table. The datachange value should report 0.0 since no data has changed in the identification column since update statistics on this column last took place:

select datachange("authors", null, "identification")

Output:

0.0

Step 3: Update 63% of the data in the identification column of the

authors table:

update authors

set identification = "DataChange Demo"

where identification = "Carrie Taylor"

update authors

set identification = "Testing"

where identification = "Steve Bradley"

Step 4: Run the datachange function against the identification column of the authors table:

select datachange("authors", null, "identification")

Output:

127.272727

Note that 63% of the data changed based on the contents of the table;

however, the datachange function is reporting that 127% of the data

in the identification column has changed. Since this operation was an

update, the data changes are counted twice, therefore the datachange

value is computed by ASE as follows:

70,000 updates = 70,000 inserts + 70,000 deletes = 140,000 datachange counts
140,000 datachange counts / 110,000 rows in the table = 1.272727, or 127.27%


Note: For this example with datachange, a bulk update is employed,

which will effectively be processed as a deferred update by ASE. Had the

example employed a row by row update, perhaps through a cursor, the

doubling of the counted changes shown in this example may not have

occurred.

Now let’s consider how the datachange value is calculated for deletes

using the same table data described above.

Note: The data has been reset to the original values shown at the start of these examples.

Step 1: Update the statistics on the identification column of the

authors table:

update statistics authors(identification)

go

Step 2: Verify the datachange function reports no data changed to the identification column of the authors table. The datachange value should report 0.0 since no data has changed in the identification column since update statistics on this column last took place:

select datachange("authors", null, "identification")

Output:

0.0

Step 3: Delete 30,000 rows from the authors table, reducing the

rowcount from 110,000 to 80,000:

delete authors

where identification = "Steve Bradley"

go

Step 4: Run the datachange function against the identification column of the authors table:

select datachange("authors", null, "identification")

go

Output:

37.5

Note that according to the datachange function, 37.5% of the identification column has changed since the statistics for this column were last updated. So why is datachange reporting 37.5% of the column’s data changed when only 27% of the table’s data was deleted?


Because ASE measures the change counts against the current rowcount of the table, not the rowcount prior to the delete operation. The same measurement principle applies to insert operations, as demonstrated in the next example.

Evaluate the datachange calculation:

110,000 = rowcount prior to delete

30,000 = number of rows deleted

80,000 = number of rows in the table after the delete

30,000/80,000 = 37.5% of the data is changed

Finally, let’s consider how the datachange value is calculated for

inserts using the same table data described above.

Note: The data has been reset to the original values shown at the start of these examples.

Step 1: Update the statistics on the identification column of the

authors table:

update statistics authors(identification)

go

Step 2: Verify the datachange function reports no data changed to the identification column of the authors table. The datachange value should report 0.0 since no data has changed in the identification column since update statistics on this column last took place:

select datachange("authors", null, "identification")

Output:

0.0

Step 3: Insert 30,000 rows into the authors table, increasing the

rowcount from 110,000 to 140,000.

Step 4: Run the datachange function against the identification column of the authors table:

select datachange("authors", null, "identification")

go

Output:

21.428571


Evaluate the datachange calculation:

110,000 = rowcount prior to insert

30,000 = rows inserted

140,000 = number of rows in the table after the insert

30,000/140,000 = 21.42% of the data is changed

Tip: Two conclusions can be drawn from these examples. First, the datachange function evaluates the percentage of data changed based upon the current rowcount of the table. Second, a delete of a given number of rows has a greater impact on the datachange calculation than an insert of the same number of rows.

Why Use datachange?

The datachange function measures the amount of data that has changed at the column, table, or partition level where statistics are maintained. The value reported by datachange is useful for determining whether additional runs of the update statistics command are necessary to synchronize the statistics with the actual data. In other words, scrutiny of the datachange value can determine the need to run update statistics at the table, column, index, or partition level.

Tip: Use the datachange function before launching any update statistics command to determine whether the run is necessary. The datachange function gives the database administrator an opportunity to minimize or eliminate one of the traditionally largest maintenance windows for Sybase ASE databases.

Evaluation of the datachange value prior to issuing an update statistics command is demonstrated in the following:

declare @datachange numeric(5,1)
select @datachange = datachange("authors", null, "identification")
if (@datachange > 20.0)
begin
    select "updating statistics of identification column for table authors"
    update statistics authors(identification)
end
else
begin
    select "datachange: " + convert(varchar(10), @datachange)
        + "% for identification column of authors table, update stats skipped."
end

Output:

datachange: 1.4% for identification column of authors table, update stats

skipped.

Datachange, Semantic Partitions, and

Maintenance Schedules

The datachange function, in combination with semantic partitions, gives database administrators the ability to minimize the maintenance windows for large tables on VLDB systems. In some cases, update statistics becomes possible on VLDB systems where it was previously infeasible because the time required to update statistics on non-partitioned tables exceeded the allotted maintenance windows. Given a very large table, semantic partitions provide a basis for dividing the update statistics operations into multiple small batches that consume far less time than update statistics at the table level.

Consider the following scenario:

For a 120 GB database, update statistics at the table level consumes 12 hours on a pre-ASE 15 system. The majority of this time is spent on update statistics operations for three very large tables, each holding approximately 30 GB of data.

Here is the ASE 15 solution:

The largest tables are each semantically partitioned by range, list, or hash into five partitions, resulting in more manageable portions of approximately 6 GB each. Instead of a 12-hour window on one day, update statistics operations run against a subset of each table five days a week, with the datachange function employed to determine whether the statistics actually require an update.

Additionally, dbcc and reorg operations are performed at the partition level on the largest tables in order to spread the traditionally large maintenance windows for these tables across several shorter maintenance windows. A sketch of this conditional approach follows, and the tables after it compare a traditional maintenance schedule with ASE 15 schedules that use semantic partitions.
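The following is a minimal sketch of that conditional, partition-level approach, written as a stored procedure so the cursor and dynamic SQL stay in one batch. The table name (Rental), the 20% threshold, the procedure name, and the assumption that the table’s data partitions carry indid = 0 in syspartitions are all illustrative:

create procedure upd_stats_by_partition
as
begin
    declare @pname varchar(255), @pct numeric(9,2), @cmd varchar(512)

    -- cycle through the data partitions of the (hypothetical) Rental table
    declare part_cur cursor for
        select name from syspartitions
        where id = object_id("Rental") and indid = 0

    open part_cur
    fetch part_cur into @pname
    while (@@sqlstatus = 0)
    begin
        -- how much of this partition changed since its statistics were updated?
        select @pct = datachange("Rental", @pname, null)
        if (@pct > 20.0)
        begin
            select @cmd = "update statistics Rental partition " + @pname
            exec (@cmd)
        end
        fetch part_cur into @pname
    end
    close part_cur
    deallocate cursor part_cur
end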


Traditional Maintenance Schedule

Mon-Sun, 12:00 AM: Backups
2:00 AM, one day per week: Index Reorgs; 2:00 AM, another day: DBCCs
10:00 AM, one day per week: Update Statistics (a single long window)

Maintenance Schedule with Semantic Partitions

Mon-Sun, 12:00 AM: Backups
2:00 AM: Reorg (Mon, Wed, Fri, Sat); DBCCs (Tue, Thur, Sun)
4:00 AM: DBCCs (Mon); Upd. Stats (five days per week)


Alternate Maintenance Schedule with Semantic Partitions

The following schedule is repeated weekly, with the following rotations:

• Week 1: Reorgs, DBCCs, and Upd. Stats on the first quarter of partitioned tables
• Week 2: Reorgs, DBCCs, and Upd. Stats on the second quarter of partitioned tables
• Week 3: Reorgs, DBCCs, and Upd. Stats on the third quarter of partitioned tables
• Week 4: Reorgs, DBCCs, and Upd. Stats on the fourth quarter of partitioned tables

Mon-Sun, 12:00 AM: Backups
2:00 AM, two days per week: Reorgs; DBCCs
4:00 AM, one day per week: Upd. Stats on the current quarter of partitioned tables

Tip: Very often, only a portion of the data within the various tables of an ASE database changes. Traditionally, update statistics could only target the table as a whole to match statistics to the actual table data. With ASE 15’s semantic partitions and the datachange function, database administrators can focus update statistics operations on only those portions of tables, indexes, or columns where data has changed.


Local Indexes

With local indexes, database administrators can create indexes that are local to specific partitions of a table. Local indexes can be clustered or nonclustered, and they can be created on classic round-robin partitions as well as on semantic partitions. Further, it is possible to mix partitioned (local) and unpartitioned (global) indexes on partitioned tables. As a rule, an unpartitioned table can have only unpartitioned indexes, so this topic is only relevant to partitioned tables in ASE 15 where the number of partitions is greater than one.

This chapter focuses on the basics of local indexes. More

detailed information on local indexes can be found in Chapter 3.

Benefits

The main benefit of local indexes is the ability to perform partition-specific maintenance at the local index level. With local indexes, it is possible to localize reorg, dbcc, and update statistics operations to a specific partition. Maintenance can therefore be minimized or spread across multiple smaller maintenance windows instead of one large index maintenance window.

Syntax:

The additional syntax for create index in ASE 15 adds the following

clause to the end of the create index statement:

index_partition_clause:

[local index] [partition_name [on segment_name]

[,partition_name [on segment_name] ...]]

Examples:

Create local, nonclustered index on the authors table, on the named

partitions of batch1, batch2, batch3, and batch4:

create nonclustered index authors_idx4

on authors(effectiveDate) local index batch1,batch2,batch3,batch4

Create local, nonclustered index on the authors table and accept the

system-generated index partition names:

create nonclustered index authors_idx2

on authors(identification) local index


Create a local, nonclustered index on the authors table, providing the partition name for the third index partition and accepting the system-generated index partition names for the first, second, and fourth index partitions of a four-way partitioned table:

create nonclustered index authors_idx6

on authors(identification) local index batch3

Create a local clustered index on the authors table and accept the system-generated index partition names:

create clustered index authors_clustered_idx

on authors(identification) local index

Note: It is not possible to have more than one clustered index on a

table. It is also not possible to have a clustered global index and a clus-

tered local index on the same table.

sp_helpindex

The sp_helpindex system procedure in ASE 15 indicates which of a table’s indexes are global and which are local. The output also shows the index partition names, which is useful because some of the partition-specific dbcc operations, such as dbcc indexalloc and dbcc checkindex, require the index partition ID as an input parameter. From the output of sp_helpindex below, we know the authors table has three indexes, two of which are local. The index partition names are displayed in the last portion of the output from this system procedure.

Example:

sp_helpindex authors

go

Output:

index_name index_keys index_description index_max_rows_per_page

index_fillfactor index_reservepagegap index_created

index_local

authors_nc1 identification nonclustered 0 0 0

9/30/2005 10:34:10.353 PM Global Index


authors_idx2 identification nonclustered 0 0 0

10/1/2005 10:03:00.463 PM Local Index

authors_idx3 code nonclustered 0 0 0

10/1/2005 10:03:24.463 PM Local Index

index_ptn_name index_ptn_seg

authors_nc1_649769372 default

authors_idx2_681769486 default

authors_idx2_697769543 default

authors_idx2_713769600 default

authors_idx2_729769657 default

authors_idx3_761769771 default

authors_idx3_777769828 default

authors_idx3_793769885 default

authors_idx3_809769942 default

(return status = 0)

Partition-level Utilities

Partition-level utilities allow the database administrator to periodically perform maintenance tasks at the partition, or sub-object, level. Maintenance can be directed toward a single partition or can cycle through all partitions of a partitioned table, and partition maintenance has been integrated with Job Scheduler. As discussed in this section, semantic partitioning can improve performance and help manage data. In particular, semantic partitions help manage large tables and indexes by dividing them into smaller, more manageable pieces. Additionally, semantic partitions are the building blocks for parallel processing, which can significantly improve performance.

In this section, the partition-level utilities and configuration

parameters are discussed in the following order:

• Partition configuration parameters
• Utility benefits from semantic partitions
• Partition-specific database consistency checks (dbccs)
• Reorg partitions
• Changes to the bcp utility
• Truncate partitions


Partition Configuration Parameters

In order to support the new partition-level utilities, two new configuration parameters have been added to the server configuration:

• number of open partitions — Sets the number of open partitions for ASE as a whole. The default is 500. To find the number of open partitions, use the sp_countmetadata system procedure as follows:

Open partitions on ASE:

1> sp_countmetadata "open partitions"

2> go

There are 239 user partitions in all database(s), requiring 505

Kbytes of memory. The 'open partitions' configuration parameter is

currently set to 500.

(return status = 0)

Open partitions within specific database:

1> sp_countmetadata "open partitions", "events"

2> go

There are 9 user partitions in events database(s), requiring 505

Kbytes of memory. The 'open partitions' configuration parameter is

currently set to 500.

(return status = 0)

Note: The section of Chapter 3 called “Semantic Partitions” discusses the fact that all tables in ASE are now considered to be partitioned; by default, all tables are of partition type round-robin. The sp_countmetadata procedure returns a count of user-created partitions of all types; however, round-robin partitions are only counted by sp_countmetadata where the number of partitions is greater than one.

• partition spinlock ratio — Sets the ratio of spinlocks to partition caches. This configuration option is only relevant to ASE when the number of online engines is greater than one. Both parameters are set with sp_configure, as shown below.
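A minimal sketch follows; the values shown are purely illustrative, not recommendations:

sp_configure "number of open partitions", 2000
go
sp_configure "partition spinlock ratio", 10
go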


Utility Benefits from Semantic Partitions

Adaptive Server 15 supports horizontal partitioning, wherein a collection of table rows is distributed among multiple disk devices. In addition, ASE 15 supports semantic partitioning, wherein value-based partitioning schemes, such as range and hash, distribute the data according to its values.

Significant benefits of semantic partitions include:

• Improved scalability
• Improved performance — concurrent I/O on different partitions, and multiple threads on multiple CPUs working concurrently on multiple partitions
• Faster response time
• Partition transparency
• VLDB support — concurrent scanning of multiple partitions of very large tables
• Range partitioning to manage historical data, hash partitioning, and round-robin partitioning

In addition to the aforementioned benefits of semantic partitions, system maintenance on ASE 15 is improved. The improvements come from partition-specific maintenance capabilities: several utilities are enhanced to perform system administration and maintenance tasks at the partition level.

Partition-specific Database Consistency Checks (dbccs)

For ASE 15, a portion of the database consistency checks is enhanced with the ability to check consistency at the partition level for semantically partitioned tables. The partition-enabled dbcc commands are dbcc checktable, dbcc checkindex, dbcc indexalloc, and dbcc tablealloc.

dbcc checktable Syntax

dbcc checktable({table_name | table_id} [,skip_ncindex | "fix_spacebits"

[,"partition_name" | partition_id]])

For dbcc checktable, a specific partition name or partition ID can be

supplied. The partition name or ID limits checktable’s analysis to the

data within a specific partition, and the local indexes for the specified

partition.


Example:

dbcc checktable(authors, null, batch1)

Output:

Checking partition 'batch1' (partition ID 569769087) of table 'authors'.

The logical page size of this table is 2048 bytes.

The total number of data pages in partition 'batch1' (partition ID

569769087) is 981.

Partition 'batch1' (partition ID 569769087) has 55000 data rows.

DBCC execution completed. If DBCC printed error messages, contact a user

with System Administrator (SA) role.

dbcc checkindex Syntax

dbcc checkindex({table_name | table_id}, index_id [,bottom_up
[,partition_name | partition_id]])

The dbcc checkindex command is similar to dbcc checktable, except

that checks are limited to the named index of the table. Additionally,

with dbcc checkindex it is possible to perform the consistency checks

for DOL tables from the leaf level upward toward the root level with

the bottom_up parameter.

Example:

-- checkindex executed on user-named index partition:

dbcc checkindex(authors, 5, bottom_up, batch1)

Output:

Checking partition 'batch1' (partition ID 841770056) of table 'authors'.

The logical page size of this table is 2048 bytes.

Table has 55000 data rows.

Index has 55000 leaf rids.

The total number of data pages in this table is 1002.

DBCC execution completed. If DBCC printed error messages, contact a user

with System Administrator (SA) role.

Example:

-- checkindex executed on system named index partition:

dbcc checkindex(authors, 3, null, authors_idx2_729769657)

Output:

Checking partition 'authors_idx2_729769657' (partition ID 729769657) of

table 'authors'. The logical page size of this table is 2048 bytes.

DBCC execution completed. If DBCC printed error messages, contact a user

with System Administrator (SA) role.


dbcc indexalloc Syntax

The dbcc indexalloc command can perform analysis on a specific index partition; the first argument of the command accepts a partition_id as input. Below, dbcc indexalloc is executed against a single index partition where index_id = 3.

dbcc indexalloc(object_name | object_id | partition_id, index_id

[,{full | optimized | fast | null} [,fix | nofix]])

Example:

dbcc indexalloc(713769600, 3, full, fix)

Output:

***************************************************************

TABLE: authors OBJID = 553769030

PARTITION ID=713769600 FIRST=610281 ROOT=610280 SORT=0

Indid : 3, partition : 713769600. 47 Index pages allocated and 7 Extents

allocated.

TOTAL # of extents = 7

Alloc page 610048 (# of extent=2 used pages=9 ref pages=9)

Alloc page 610304 (# of extent=5 used pages=39 ref pages=39)

Total (# of extent=7 used pages=48 ref pages=48) in this database

DBCC execution completed. If DBCC printed error messages, contact a user

with System Administrator (SA) role.

dbcc tablealloc Syntax

The tablealloc check can be performed at the partition level when

passed partition_id. The tablealloc command at the partition level will

also perform checks on all of the local indexes for the stated

partition_id.

dbcc tablealloc(object_name | object_id | partition_id
[,{full | optimized | fast | null} [,fix | nofix]])

Example:

dbcc tablealloc(569769087, full, fix)

Output:

***************************************************************

TABLE: authors OBJID = 553769030

PARTITION ID=569769087 FIRST=16642 ROOT=75755 SORT=0

Data level: indid 0, partition 569769087. 1002 Data pages allocated and 126

Extents allocated.


TOTAL # of extents = 126

Alloc page 16640 (# of extent=2 used pages=16 ref pages=16)

Alloc page 30464 (# of extent=1 used pages=8 ref pages=8)

Alloc page 34816 (# of extent=1 used pages=8 ref pages=8)

.

.

Alloc page 75008 (# of extent=9 used pages=72 ref pages=72)

Alloc page 75264 (# of extent=10 used pages=80 ref pages=80)

Alloc page 75520 (# of extent=8 used pages=60 ref pages=60)

Total (# of extent=126 used pages=1004 ref pages=1004) in this database

DBCC execution completed. If DBCC printed error messages, contact a user

with System Administrator (SA) role.

Reorg Partitions

ASE 15 enables the reclaim_space, forwarded_rows, and compact

parameters of the reorg command to run on a single partition. The

reorg rebuild command can also be run against single index

partitions.

Benefits of Reorg Partitions

Traditionally, a reorg operation on the largest of tables is very time-consuming for ASE servers, and reorg operations were only permitted at the table or index level. Reorg at the partition level is a new feature of ASE 15.

Using reorg at the partition level gives the database administrator a mechanism to divide the reorg tasks for very large tables across multiple maintenance windows. Additionally, if the database administrator knows that only a subset of a table or index’s partitions is fragmented, reorg operations can be directed to only those partitions where they are necessary.

Syntax:

reorg forwarded_rows table_name partition partition_name
[with {resume, time = no_of_minutes}]
reorg reclaim_space table_name [index_name] partition partition_name
[with {resume, time = no_of_minutes}]
reorg compact table_name partition partition_name
[with {resume, time = no_of_minutes}]
reorg rebuild table_name [index_name [partition index_partition_name]]


where partition_name is the name of the table partition on which you are running reorg, and index_partition_name is the name of the index partition.

Note: For ASE 15 GA, reorg rebuild works at the table level, even when

passed a partition name as an input parameter. The ability to perform

reorg rebuild at the partition level may be available in a future IR release

of ASE 15.

Examples:

Run reorg rebuild on the rental_idx2 local index of the Rental table. The rental_idx2 local index resides on the system-named index partition rental_idx2_681769486:

reorg rebuild Rental rental_idx2 partition rental_idx2_681769486

Run reorg reclaim_space on the Rental table’s clustered index. This is a local index, so the reorg is performed on only the portion of the index that is specific to the partition named range_part1:

reorg reclaim_space Rental rental_clustered_idx partition range_part1

Run reorg compact on the portion of the Rental table specific to the

range partition range_part2:

reorg compact Rental partition range_part2

Run reorg forwarded_rows on the portion of the Rental table specific

to the range partition range_part4:

reorg forwarded_rows Rental partition range_part4
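Partition-level reorgs can also be bounded in time and resumed in a later window using the with resume, time option from the syntax above. A minimal sketch; the 30-minute limit is illustrative:

reorg compact Rental partition range_part2 with resume, time = 30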

Run reorg rebuild on the Rental table:

reorg rebuild Rental partition batch2

Note: Remember for ASE 15 GA, reorg rebuild on a specific partition

performs the reorg operation on the table as a whole.

Changes to the bcp Utility

The bcp utility is enhanced in ASE 15 to support bcp operations at the semantic partition level. This is not the only improvement: the capability of using bcp with computed columns has also been added. These enhancements address pre-ASE 15 limitations of the utility.


The limitations of the pre-ASE 15 version of bcp are:

• Can only specify one partition and one file in a bcp command
• The --maxconn parameter is not available

bcp for ASE 15 allows the database administrator to:

• bcp data in or out of any type of partitioned table: range, hash, list, or round-robin. Parallel bcp supports partitions in both slow and fast modes.
• Bulk copy data out from all partitions or a subset of partitions
• Bulk copy data out to a single file or to partition-specific files
• Bulk copy data in from a single file, multiple files, or partition-specific files
• Bulk copy data in parallel to specific partitions

The new syntax for bcp follows; the new elements are the slice_num and partition clauses and the --maxconn, --show-fi, and --hide-vcc options:

bcp [[db_name.]owner.]table_name [:slice_num] [partition pname] {in |

out} [filename]

[-m maxerrors] [-f formatfile] [-e errfile]

[-F firstrow] [-L lastrow] [-b batchsize]

[-n] [-c] [-t field_terminator] [-r row_terminator]

[-U username] [-P password] [-I interfaces_file] [-S server]

[-a display_charset] [-z language] [-v]

[-A packet size] [-J client character set]

[-T text_or_image size] [-E] [-g id_start_value] [-N] [-X]

[-M LabelName LabelValue] [-labeled]

[-K keytab_file] [-R remote_server_principal]

[-V [security_options]] [-Z security_mechanism] [-Q] [-Y]

[--maxconn maximum_connections] [--show-fi] [--hide-vcc]

where:

• slice_num — A number designating the partition into which you
are bulk copying data. This number ranges from 1 to the total
number of partitions in the table. For example, if the table has 15
partitions, the range for slice_num would be from 1 to 15.
slice_num is only valid for bcp in on round-robin partitioned tables.
• partition pname — Specifies a comma-delimited set of one or
more unique partitions. This parameter can be used for both bcp
out and bcp in. This parameter cannot be used in conjunction
with the slice_num option.


• filename — Specifies a comma-delimited set of one or more
datafiles. Can be specified for both bcp in and bcp out.
• maximum_connections — The maximum number of parallel
connections bcp can open to the server. If this is not specified,
bcp automatically determines the number of connections.
• --show-fi — Tells bcp to copy functional index data. This
parameter can be used for both bcp out and bcp in.
• --hide-vcc — Tells bcp not to copy virtual computed columns.
This parameter can be used for both bcp out and bcp in.

Examples:

Copies the Rental table to the rentals.dat file:

bcp Properties..Rental out rentals.dat

Copies the contents of the datafile rentals.dat file to Rental:

bcp Properties..Rental in rentals.dat

Copies the contents of partitions p2, p3, and p4 to the rentals.dat file:

bcp Properties..Rental partition p2, p3, p4 out rentals.dat

Copies rentals.dat into the first partition of the round-robin
partitioned table Rental:

bcp Properties..Rental:1 in rentals.dat

Copies multiple datafiles into the Rental table:

bcp Properties..Rental in p2.dat, p3.dat, p4.dat

Copies Rental to a file:

bcp Properties..Rental out

Because a name is not specified in the command, the file is named

the same as the partition with a .dat extension added. For example, if

the table consists of the partitions p1, p2, p3, and p4, bcp creates four

output datafiles: p1.dat, p2.dat, p3.dat, and p4.dat. If the table is not

partitioned, the datafile is named after the single default partition that

comprises the table. The Tenants table below is non-partitioned, and
a bcp out with the following syntax will result in one bcp file, named
after the table’s single partition:

bcp Properties..Tenants out


The following command specifies a subset of the partitions for the

Rental table. Since the command does not specify an output file but

does specify a subset of the partitions for the Rental table, bcp creates

two datafiles: p3.dat and p4.dat.

bcp Properties..Rental partition p3, p4 out

Usage

• For bcp out, if you specify both the partition pname clause and
filename clause, either:
  • The filename parameter contains a single datafile and all data
    will be copied to that file, or
  • There is a one-to-one mapping between the partition names
    and the datafiles, and the data from a partition will be copied
    to its specific datafile.
• If a specified partition does not exist, the bcp out command fails
and does not copy out any of the partitions.
• You cannot use partition_id with bcp out.
• For bcp out, if you do not specify a file name, each partition’s
data is copied to a file that is named after the partition and
appended with a .dat extension.
• For bcp in, if you do not specify any partitions, you can list
multiple data files. If you specify partitions, you must include a
one-to-one mapping between the partition names and the
datafiles.
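For example, under the one-to-one mapping rule just described, a
command of the following form (the partition and file names are
illustrative) loads each datafile into its matching partition:

bcp Properties..Rental partition p2, p3, p4 in p2.dat, p3.dat, p4.dat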

When you bcp data into a table, the input file can contain data for one

or more partitions. Clients can send data across parallel connections,

and each connection is a dedicated channel for a set of one or more

partitions.

If the bcp client has partition-level information to map files to

partitions, bcp can initiate dedicated connections for each partition

into which data is being copied.

bcp: Client-side Parallelism

The bcp client determines the number of connections to the server.

For maximum parallelism, this includes a separate connection for

each partition to the file mapping provided as parameters to the bcp

command.

The maxconn value indicates the maximum number of connec-

tions the bcp client is allowed to set up. Without this parameter, the


bcp client automatically chooses the number of connections. One

connection per partition is ideal, but if that is not possible, bcp will

multiplex batches for more than one partition across a connection.
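As a sketch of how this fits together (the partition and file names are
illustrative), the following command loads four partition-specific
files while capping the client at four parallel connections:

bcp Properties..Rental partition p1, p2, p3, p4 in p1.dat, p2.dat, p3.dat, p4.dat --maxconn 4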

The batch_size option for bcp in applies to each file that is cop-

ied into a table, regardless of whether the data goes into a single

partition or is distributed across multiple partitions.

Truncate Partitions

With ASE 15, truncate table has been enhanced to deallocate the data

from ASE 15 partitions. Truncation at the partition level can be
performed on the default round-robin style partition as well as on
semantically partitioned tables.

As with truncations at the table level, pages are deallocated with

truncate, but only at the partition level. Similar to truncate table,

using the partition argument does not log the deletes one row at a

time, and is much faster than a delete with a where clause to delete

data from a single partition.

Note: Truncate table at the partition level can only truncate one partition

at a time.

The syntax is:

truncate table [[database.] owner.] table_name [partition

partition_name [,partition_name] ...]

Truncate the data from one partition of a four-way hash partitioned
table.

Prior to truncate:

partition_name partition_id pages

p1 1433772165 131

p2 1449772222 129

p3 1465772279 132

p4 1481772336 130

Truncate syntax:

truncate table authors partition p2

After truncate:

partition_name partition_id pages

p1 1433772165 131


p2 1449772222 1

p3 1465772279 132

p4 1481772336 130

Note that after a truncate partition of the hash partitioned table, the

database administrator will need to consider partition rebalancing.

Looking Forward — Drop Partition

Note: Since drop partition is not available in the GA release of ASE 15,

use truncate partition where drop partition would otherwise be used.

In the future, the drop partition command may be included in ASE. If

so, some potentially interesting scenarios could occur. For example,

if a database administrator drops a partition and then later needs to

restore the data, re-adding the partition may get interesting, particu-

larly for range partitions. Think of what happens if you have a

semantic partitioned table by range <=10,000, <=20,000, etc., and the

database administrator drops the first partition due to archiving data.

If a subsequent insert of 5,000 occurs, it goes into partition number

two. Now if the DBA needs to re-add the range partition for data

<10,000 for any reason (e.g., unarchival), a row of data will already
be in place on partition two, which when using local indexes may not

be locatable and could become “missing” unless a table scan occurs.

Very Large Storage System

ASE 15 is ready for very large data-intensive environments. For ASE

15, it is possible to create up to 2,147,483,647 disk devices. Each of

these devices can be up to 4 terabytes in size.

Here is a breakdown of these numbers comparing pre-ASE 15

and ASE 15’s storage limitations.

Pre-ASE 15:

• 256 devices per server
• Each device could be up to 32 GB in size
• Maximum overall storage was therefore 8 TB (256 × 32 GB)

ASE 15 very large storage system:
• Devices = 2^31
• Maximum device size = 4 TB
• Supports the creation of a large number of databases of size 8 TB
• Greatly enhances the maximum overall storage limit of ASE
• The maximum overall storage supported by VLSS is that provided
by 2^31 devices, each of size 4 TB (2^33 TB in total)
• VLSS does not increase the theoretical maximum size of a
database, which currently stands at 8 TB (2^31 logical pages of
4 KB each)

Disk Init

In order to allocate disk devices to ASE, the disk init command is

utilized. With ASE 15, the disk init command no longer requires the

vdevno parameter in order to create an ASE device, making the man-

agement of ASE simpler. However, the disk init command will still

accept the vdevno parameter provided the following conditions are

met:

• The vdevno number specified is not already in use
• The vdevno number is within the range of allowed values (0 to
2,147,483,647)
• The upper limit specified by the configuration parameter number
of devices has not been reached
• The vdevno specified cannot be 0, as 0 is reserved for the master
device

If vdevno is not passed as an input parameter to the disk init com-

mand, ASE will select a vdevno for the device that is higher than the

highest vdevno held in ASE. So, if the active vdevno numbers uti-

lized are as follows: 0, 1, 2, 3, 4, 12, 34, 56, 57, 58, 75, then ASE will

assign a vdevno for the next device created, where vdevno is not

specified as an input parameter, to be 76 or greater for this example.
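For example, a disk init that omits vdevno might look like the
following; ASE assigns the next available virtual device number (the
device name, physical path, and size shown are illustrative):

disk init
    name = "data_dev1",
    physname = "/sybase/devices/data_dev1.dat",
    size = "4G"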

Large Identifiers

There are new limits for the length of object names and identifiers:

255 bytes for regular identifiers and 253 bytes for the delimited iden-

tifiers. The new large identifier applies to most user-defined

identifiers including table name, column name, index name, and so

on. Due to the expanded limits, some system catalogs and built-in

functions have been expanded.


For #temporary tables, the identifier can have a maximum of 238
bytes, since ASE appends a 17-byte suffix to temporary table names.
For variables, the “@” is counted as 1 byte, leaving 254 bytes for the
name.
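As a brief illustration, identifiers like the following, well beyond the
old 30-byte limit, are now legal (the names are invented for this
example):

create table rental_property_maintenance_request_history
( maintenance_request_tracking_identifier int )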

Below are lists of identifiers, system catalogs, and built-in func-

tions that are affected by the new limits. The pre-ASE 15 restriction

was 30 bytes.

Long Identifiers

• Application context name
• Cache name
• Column name
• Constraint name
• Default name
• Function name
• Index name
• JAR name
• LWP or dynamic statement name
• Procedure name
• Rule name
• Table name
• Time range name
• Trigger name
• User-defined datatype
• Variable name
• View name

Short Identifiers

(The maximum length for these identifiers remains 30 bytes.)

• Application name
• Character set name
• Cursor name
• Database name
• Engine name
• Execution class name
• Group name
• Host name
• Host process identification
• Initial language name
• Logical device name
• Login name
• Password
• Quiesce tag name
• Segment name
• Server name
• Session name
• User name


Unicode Text Support

New in ASE 15, Sybase includes unitext support. The three datatypes

associated with Unicode support are:

• unichar — Similar to char columns, introduced in 12.5
• univarchar — Similar to varchar() columns, introduced in 12.5
• unitext — Similar in treatment to text columns in ASE

Unicode datatypes can be used to migrate legacy data to ASE.
Additionally, Unicode data allows for conversion between datatypes,
which can be useful for foreign language-based installations. All
ASE character sets can be converted to Unicode, and Unicode can in
turn be converted to any ASE character set.
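A minimal sketch of a table using all three Unicode datatypes (the
table and column names are illustrative):

create table PropertyDescriptions
( property_id int,
  short_name unichar(30),
  display_name univarchar(60),
  full_description unitext )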

New Datatypes

• bigint — Supports signed integers from
–9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
• unsigned <xx> — Allows for range extension on the ASE integer
datatypes of smallint, int, and bigint

Ranges for unsigned datatypes:

Datatype Range

unsigned smallint 0 to 65,535

unsigned int 0 to 4,294,967,295

unsigned bigint 0 to 18,446,744,073,709,551,615

For unsigned datatypes, it is valid to also use unsigned tinyint, as this

is syntactically supported. However, unsigned tinyint will function as

a standard tinyint.

Note: The range for each of the unsigned datatypes doubles on the pos-

itive side of zero since it is not possible to declare a negative number

with unsigned integer variables. The storage size for each of the

unsigned datatypes remains the same as the signed version of the corre-

sponding datatype.
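A short sketch of the new integer datatypes in a table definition (the
table and column names are illustrative); the unsigned columns
cannot hold negative values but gain the doubled positive range:

create table RentalCounters
( counter_id bigint,
  daily_hits unsigned int,
  lifetime_hits unsigned bigint )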


New Functions

• audit_event_name() — Returns the event name for an audit

event

Syntax:

audit_event_name(event_id)

Example:

select * from sybsecurity..sysaudits_01 where

audit_event_name(event) = "Drop Table"

select audit_event_name(event) from sybsecurity..sysaudits_01

• biginttohex() — Converts integer values to hexadecimal values

Syntax:

biginttohex(integer_expression)

Example:

select biginttohex(123451234512345)

Output:

----------------

000070473AFAEDD9

• count_big() — Similar to the count aggregate function, but has a
higher signed upper limit of 9,223,372,036,854,775,807 for
counts. For comparison purposes, the preexisting count()
function’s limit for a signed integer is 2,147,483,647.

Syntax:

count_big([all | distinct] expression)

Example:

select count_big(identification) from authors

Output:

--------------------

22500

• datachange() — See the section called “Datachange” in this
chapter


• hextobigint() — Converts large hex values to large integer values

Syntax:

hextobigint(hexadecimal_string)

Example:

select hextobigint("12345fffff")

Output:

--------------------

78188118015

• is_quiesced() — Returns 1 if the database is quiesced, 0 if it is
not.

Syntax:

is_quiesced(dbid)

Example:

select is_quiesced(5)

Output:

-----------

0

• partition_id() — Returns the partition ID for a specific data or
index partition

Syntax:

partition_id(table_name, partition_name [,index_name])

Example:

select partition_id("authors_hash", "p2")

Output:

-----------

1449772222

• partition_name() — Returns the partition name for a specific data
or index partition ID

Syntax:

partition_name(indid, ptnid [,dbid])


Example:

select partition_name(3, 681769486)

Output:

------------------

rentals_idx2_part3

• tran_dumpable_status() — Returns an indicator showing whether
a database can have a dump tran executed against it. A return of
zero (0) indicates a dump tran is allowed. A non-zero return
indicates a dump tran is not allowed.

Syntax:

tran_dumpable_status("database_name")

Example:

select tran_dumpable_status("Events")

Output:

-----------

120

Deprecated Functions

As with any changed function or system-generated output, there may
be an impact on any custom routines that rely on or scrutinize the
specific function. When a function is deprecated, it is not fully
removed from ASE. A deprecated function no longer works the way
it was designed; instead, it returns only a zero. This deprecation, as
opposed to full removal from ASE, keeps the third-party applications
many database administrators use from crashing due to references to
invalid functions.

• data_pgs — Deprecated and replaced with the data_pages
function
• ptn_data_pgs — Deprecated and replaced with the data_pages
function
• reserved_pgs — Deprecated and replaced with the reserved_pages
function
• row_cnt — Deprecated and replaced with row_count
• used_pgs — Deprecated and replaced with the used_pages
function
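For example, where a custom script once called data_pgs, a call of
the following form uses the replacement function instead (the Rental
table is used for illustration):

select data_pages(db_id(), object_id("Rental"))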

Tip: It is always a good practice to thoroughly test all functions and

keywords before migrating a production system and the underlying sup-

port software to a new database server release. This is especially true for

systems that employ custom system procedures that are based on

Sybase system procedure code excerpts, or with many of the third-party

tools used to administer ASE.

New Configuration Parameters

ASE 15 contains nine new parameters compared to ASE 12.5.3.

• default xml sortorder — Defines the sort order used by the XML
engine
• enable metrics capture — Instructs ASE to capture QP metrics at
the server level
• enable semantic partitioning — Enables the separately licensed
feature of semantic partitions in ASE
• max repartition degree — Specifies the maximum amount of
dynamic repartitioning that ASE can perform
• max resource granularity — Specifies the maximum amount of
system resources that a single query can use in ASE. At this time,
the effect of this parameter is only felt by the procedure cache.
Additionally, the value specified serves as an optimization guide,
not a hard ceiling.
• number of open partitions — Maximum number of partitions that
ASE can access at one time
• optimization goal — Controls the server-level optimization
strategy
• optimization timeout limit — At the server level, the maximum
amount of time, as a percentage of total query time, that can be
spent in the optimization stage of query processing
• sysstatistics flush interval — The number of minutes between
flushes of sysstatistics


Eliminated Configuration Parameters

ASE 15 contains no eliminated configuration parameters compared
to ASE 12.5.3.

New Global Variables

The new ASE 15 global variables are listed below with a brief
description. The global variable @@setrowcount is introduced to
provide more useful information to the user. A pair of global
variables, @@cursor_rows and @@fetch_status, were introduced to
support scrollable cursors.

• @@cursor_rows — Number of rows in the cursor result set, when
fully populated
• @@fetch_status — Reports on the success or failure of a fetch
from a cursor
• @@setrowcount — Returns the value of set rowcount at the
session level
• @@textdataptnid — Partition ID of a text partition referenced by
@@textptr
• @@textptnid — Partition ID of a data partition referenced by
@@textptr
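A brief sketch of the cursor-related variables with an ASE 15
scrollable cursor (the Rental table and rental_id column are
illustrative):

declare rental_cur scroll cursor for
    select rental_id from Rental
go
open rental_cur
fetch first rental_cur
select @@cursor_rows, @@fetch_status
go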

Summary

By now you will have noticed that there are sufficient material

enhancements and changes that are relevant to the topic of “system

administration” to constitute a full volume in itself. From Chapter 2,

one can gain at least a basic understanding of the progress made in

system administration with the ASE 15 release. In addition, the

examples illustrate the power of many of the new maintenance tech-

niques made possible with the introduction of semantic partitions,

which are covered in detail in Chapter 3.


Chapter 3

Semantic Partitions and Very Large Database (VLDB) Support

This chapter addresses the following areas of partition management:

• Partition terminology
• Types of partitions supported by ASE 15
• The new partitioning features introduced with ASE 15
• How the new partitioning features benefit the DBA
• How partitioning affects the total cost of ownership (TCO)
• A recommended approach to implementing semantic partitions

Introduction

When Sybase first introduced table partitioning in ASE 11.0, the goal

was to reduce last page contention, including last page lock
contention, on tables and page chains where all of the data was being
added to the end of the chain. This was referred
to as table slicing or equi-partitioning, whereby the table was divided

into equal-sized parts or slices. Sybase later introduced row-level

locking in ASE 11.9 to further address the page chain contention by

locking the row instead of the page. Until ASE 15, index partitioning

was not even addressed.

As data volumes grow because of retention requirements, tradi-

tional online transaction processing (OLTP) databases are fast


becoming very large databases (VLDBs). In today’s business envi-

ronment, what were once OLTP databases are now becoming mixed

workload databases as these databases transition from OLTP data-

bases to decision support system (DSS) databases. With ASE 15,

Sybase has devoted a lot of effort to addressing VLDB issues from
the perspectives of OLTP, DSS, and mixed workload databases.

Some of the challenges faced by the DBA in VLDB environ-

ments include:

• Very large databases are characterized by large volumes of data
in one or more tables. Large data tables usually need large
amounts of data processing resources to resolve scan type
queries.
• As the data volume grows, so will the need for data maintenance
windows. The time required for maintenance activities on large
tables increases with the volume of data in tables.
• In 24 x 365 processing environments, off-peak usage time is
shrinking, thereby shrinking or eliminating existing DBA
maintenance windows. Shrinking maintenance windows affect
DBA activities and their ability to effectively manage the
databases and the database environment.
• Some database administrative maintenance activities make
systems unavailable for other activities, especially when the
volume of data to be maintained is large and ever growing.
• In a mixed workload environment, DSS applications adversely
affect OLTP systems. Similarly, OLTP applications suffer
performance when a DSS application is expected to access shared
data from large tables.

Semantic partitioning in ASE 15 addresses many of the VLDB chal-

lenges by introducing and enhancing:

• Data partitioning
• Index partitioning
• Parallelism

Data partitioning has the potential for reducing the resource
requirements for queries by accessing only a subset of the data in a
table, utilizing partition elimination and directed joins at the data
partition level. These concepts are detailed later in this chapter. Index parti-

tioning reduces physical I/O by reducing the number of index tree


levels that have to be traversed to resolve selective queries. The fewer

the levels of index traversal, the faster the query response. Although

global indexes may be defined on a table, parallel index scans will

now be possible if a local index is defined on the corresponding data

partition.

In addition to core partitioning concepts, the strategies employed

using partitions can help select, delete, update, and insert commands

on the targeted data by selectively processing the data in smaller

chunks. For example, instead of running a 24-hour delete operation to

purge “older” data, it will be possible to truncate that partition if an

appropriate partitioning strategy is employed. DSS applications can

operate on “old” or “archival” partitions, and OLTP applications can

work on “hot” or “active” partitions. In VLDB environments, shrink-

ing DBA maintenance windows often force maintenance activities to

be postponed or ignored. ASE 15 partitioning is a direct answer to

these issues. Partitioning allows the DBA to perform maintenance

tasks at the partition level instead of always at the full table level.

ASE 15 partitioning also has the added benefit of reducing the total

cost of ownership.

Why Partition Data?

Partitioning is a data placement strategy whereby large sets of data

are broken down into manageable chunks or partitions. Data can be

processed in multiple partitions simultaneously, achieving higher lev-

els of performance and scalability. Similarly, DBA activities can be

concurrently processed across one or more partitions, thereby

increasing data availability and decreasing system downtime.

The goals of partitioning are often seen as twofold. First, it is

meant to provide a method of spreading data that increases perfor-

mance as well as addresses scalability. This was originally

accomplished using striping with previous versions of ASE.

The second goal is to manage data in such a manner as to allow

for data exclusion when querying large portions of data that have

similar data with differing demographics. For example, if you sell

products to five different regions you might want to be able to query

each region as if the data were located on separate disk devices. By

partitioning data with similar meaning or semantics (such as region,

zip code, date of access), only the access paths for the search
arguments specified in the where clause will be considered, excluding
costly I/O requests against the entire physical disk.

Benefits of Partitioning

When Sybase first introduced partitioning, there were two immediate

benefits. First, partitioning reduced contention on the “hot spot” at

the end of the table’s page chain. If a table did not have a clustered

index, all inserted rows were added to the end of the page chain. This

hot spot was a major performance bottleneck for applications where

large amounts of data were being added to a table. The partitioning

addressed the hot spot since ASE would randomly choose a partition

for each transaction. If any rows were inserted by the transaction,

ASE would place them on the assigned partition. This type of parti-

tioning is called non-semantic partitioning. Second, the partitioning

provided the optimizer with the ability to determine how many paral-

lel worker processes could be considered for use when performing a

table scan on a partitioned table. This type of partitioning is called

horizontal partitioning.

With ASE 15, partitioning has been extended to include defin-

able partitioning strategies. This vertical partitioning is based on

several industry-accepted partitioning methods. This is a significant

benefit for users who have very large tables where table scans are a

major performance issue. When partitioning is combined with paral-

lel worker processes and segment placement, high degrees of

performance improvement are possible.

Prior to Sybase ASE 15, data skew has been an issue; the partition
strategies provided in ASE 15 also address it. The problem with data
skew was that as the equi-partitions became unbalanced, parallel
queries became less effective, and eventually parallelism was not
advisable.

partitioning, the optimizer is able to use the semantics of the data val-

ues used in the partitioning keys to determine more appropriate

parallel query algorithms, effectively eliminating data skew as a con-

cern. This is important as the very semantics of the data is likely to

lead to skew. For example, if partitioning customers by state, the par-

titions are likely going to be very skewed as the population of more

populous states such as California will be larger than small or

sparsely populated states such as Idaho.


With ASE 15, the ability to create local index partitions allows the
relevant portion of the index to be used when it corresponds directly
to a table partition. This reduces the amount of I/O required to
interrogate the index pages when only a portion of the index is
necessary to resolve the query.

When your ASE server is upgraded to ASE 15, table partitioning

will be the default. An entry will be added to the syspartitions table

for each table and table/partition combination. Although partitions

are not required to be specified at table creation, the table will be

created with one partition if none are specified. The new

ASE_PARTITIONS license is not necessary in order for you to cre-

ate tables. It only becomes required if you decide to use the new

semantic partitioning. Licensing is discussed in the section in this

chapter called “Configuring ASE for Semantic Partitioning.”

One major issue facing many companies is the management of

historical data. As data ages, fewer queries are directed at the aged

data. As the data ages to the point where it is no longer useful, it has

to be purged. In most cases, purging of aged data can be a resource

and performance problem. By partitioning data appropriately, new

functionality within ASE 15 can ease the maintenance associated

with aged data. Truncating partitions is one such option for removing

aged data easily.

Vertical partitioning of data and indexes is a major step forward

for Sybase. Vertical partitioning addresses many performance issues

associated with very large tables. In data warehousing terminology,

fact tables will benefit from partitioning. The dimensions on which

fact tables are built lend themselves to data partitioning. By combin-

ing partition segmentation with data and index segmentation, DSS

applications should be able to achieve acceptable performance levels.

Partition Terminology

Sybase has introduced several new terms into our partitioning vocab-

ulary. Many of the terms are known in the industry. However, in

order to understand how partitioning is enhanced, an understanding

of the underlying terminology is necessary.

Semantic partition — A method of dividing a domain according to

the principles of what the data means.


Intra-operator parallelism — A technique used by database man-

agement systems to partition the data workload across multiple

processing units to allow for parallelization of the query

resolution.

Data partition — The basic building block for an ASE 15 table. All

tables, whether defined with one or multiple partitions, are com-

posed of at least one data partition. With ASE 15, if partitioning

is not explicitly defined, the first allocation of the table defaults
to partition 1.

Index partition — A portion of an index that contains a subset of the

index leaf pages. The root page for the index is located in the

first extent allocated for the index. Remember that you can

define a particular segment where this will be allocated. This

may be different for indexes on APL or DOL tables since there

are some low-level differences in index structure and page allo-

cation for these locking schemes. In any case, the value of the

root page is stored in the rootpage column of the syspartitions

table. In pre-ASE 15, the page number for the root page was

stored in the root column in the sysindexes tables. The index

partition will have a unique partition ID in the syspartitions table.

A partitioned index cannot exist for a table that only has one par-

tition. In the example below, note that the partition ID does not

change given that the segment number is 1 for the table and its

indexes.

Example:

name indid id partitionid segment

rental_1904006783 0 1904006783 1904006783 1

rental_fbi_idx_1904006783 2 1904006783 1904006783 1

CoveringIndex_1904006783 3 1904006783 1904006783 1

Partition key — The key that determines the partition assignment on

which a data or index row will be maintained. Each table can

have up to 31 columns that define the partition key. Sybase cur-

rently recommends that the number of columns be limited to four

or less. Partition keys with more than four columns increase the

complexity associated with defining and managing the support-

ing partitions. Partition keys are ordered internally according to

ANSI SQL2 comparison rules. When evaluating partition keys to

determine which partition to use, ASE uses a vector method to
find the corresponding partition. Columns with datatype bit, text,
or image, Java classes, and computed columns are not allowed to
participate in defining a partition key. In order for a partition key

to be effective, the columns should be found as part of a where

clause in queries that have a high frequency of use or a high pri-

ority of use. If the columns are found in many locations

throughout the application’s SQL queries and stored procedures,

it would also be a good partition key candidate.

Partition bound value — The beginning or ending value of a range

in a range-partitioned table. The beginning bound value of each

subsequent partition has to be greater than the ending bound

value of the preceding partition. The following example shows

two possible range combinations: one for numeric data and one

for character data. However, as already explained, range parti-

tions are defined with only an ending bound value. Hence, it is

possible to add a new range partition at the top of the range.

                Beginning Bound   Ending Bound
Partition 1     0                 100
Partition 2     101               201

or

                Beginning Bound   Ending Bound
Partition 1     a                 f
Partition 2     g                 m
Partition 3     n                 s
Partition 4     t                 z

Lower bound — In a range-partitioned table, this is the lowest value

of the partition key that can be assigned to the applicable parti-

tion. In the example of the partition bound value, it is shown as

the beginning bound. The lower bound does not need to be

defined on the first partition. If it is not specified, the lower

bound is the lowest possible value for the partition key

datatype(s).

Upper bound — In a range-partitioned table, this is the highest value

of the partition key that can be assigned to the applicable parti-

tion. In the example of the partition bound value, it is shown as

the ending bound. The final partition can contain the keyword

MAX, which indicates that all rows with a partition key value

higher than the upper bound of the next-to-last partition are

placed in this partition.

Equi-partition — In reference to multi-processor machines, equi-

partition is an algorithm that partitions a processor evenly for job


execution. In reference to data or index partitioning, it is a set of

rows that exhibit equal distribution across the number of defined

partitions. This characteristic is exhibited in pre-ASE 15

partitioning.

Global index — This is an index that references data across all parti-

tions. The index can be clustered or nonclustered. This type of

index is not new to ASE 15. The physical location of the index

can be on any partition — independent of it being clustered or

nonclustered. Global clustered partitioned and non-partitioned

indexes can be created on round-robin partitioned tables.

Local index — This index references data on a specific partition.

The index can be clustered or nonclustered. This type of index is

new to ASE 15. The index is physically located on the partition

for which it is defined. Clustered indexes are always created as

local indexes on range, hash, and list partitioned tables. Clustered

indexes can be created as local on round-robin partitioned tables.
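A minimal sketch of creating a local index (the table and column
names are illustrative); the local index clause builds one index
partition per data partition:

create nonclustered index rental_date_local_idx
on Rental (rental_date)
local index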

Prefixed partitioned index — A prefixed partitioned index is one

where the first column of the index definition is the same column

as the first column of the partition key.

Example:

create table PartitionedTable

( ColumnA int,

ColumnB char(4),

IndexColumn1 int,

IndexColumn2 varchar(20),

ColumnC int )

partition by range (IndexColumn1)

...

create index prefixedPartitionedIndex
on PartitionedTable (IndexColumn1, IndexColumn2)

Non-prefixed partitioned index — A non-prefixed partitioned

index is one where the first column of the index definition is not

the same column as the first column of the partition key.

Example:

create table PartitionedTable

( ColumnA int,

ColumnB char(4),

IndexColumn1 int,

IndexColumn2 varchar(20),

ColumnC int )


partition by range (IndexColumn1)

...

create index non_prefixedPartitionedIndex
on PartitionedTable (IndexColumn2)

Partition ID — The ID assigned by ASE that uniquely identifies the

partition. The ID is stored in the syspartitions table as the

partitionid column. For a data partition, the ID is also stored in

the data_partitionid column.

Example:

name indid id partitionid segment data_partitionid

rental_1904006783 0 1904006783 1904006783 1 1904006783

rental_fbi_idx_1904006783 2 1904006783 1904006783 1 0

CoveringIndex_1904006783 3 1904006783 1904006783 1 0

partition_id() — A new system function that returns the partition ID

for a specified table and partition.

Semantic Partitions

With ASE 15, partitioning has been extended to include new partition

types. In addition to reducing the page lock contention, the new parti-

tioning schemes extend the data access paths by defining partitions

based on application needs. For example, it would be great to be able

to define a table with zip codes where the customer’s data would be

placed in the corresponding partition for the zip codes instead of par-

titioning the table into a number of unrelated partitions. Also, ASE 15

partitions, for the most part, can be defined so that they are self con-

tained with their local indexes. Therefore, a search operation can be

performed on the indexed portion and thus a former need to split

large tables into many small tables is no longer necessary. It is evi-

dent that splitting a database with large tables into many databases

with small tables exacerbates database management problems. For

example, in a pre-ASE 15 database, recreating a clustered index or
running reorg rebuild required extra free space of
20% of the object size. If you split a 500 GB

database into 50 databases each of 10 GB, you would be wasting 100

GB additional disk space to perform clustered index maintenance

activities because of the aforementioned space requirement. On the

other hand, if 50 partitions of 10 GB each were to be used, you need


only 2 GB of extra disk space to manage partition-based local

indexes. ASE 15 semantic partitioning allows you to reap all of the

benefits of multiple databases while reducing the requirement for

large amounts of reserved free space.

Another issue is that query processing problems arise as table

sizes increase. When a large table is split into many small tables and

placed across several databases in the same or different dataservers,

join query performance comes at a premium. With ASE 15 partition-

ing, if the same large table is split into many meaningful partitions,

joins may only be local to the table partition. If parallel access to par-

titions can be achieved, the response times may even exceed client

expectations. Semantic partitioning is the answer to the issue of ever

exploding data volume.

Semantic partitions in ASE 15 will have some meaning attached

to each slice. The industry-standard partitioning in ASE 15 is not just
intended to manage space; it caters to customer needs for faster
access to data in an environment of ever increasing data volume. In

a very large database with round-the-clock data processing scheduled

against large tables, database management activities like dbccs,

update statistics, table reorgs, bcps, or table truncation can be time

consuming and often result in either application downtime or perfor-

mance degradation. Sybase has now extended the functionality of

partitioning to include maintenance operations that in earlier releases

were only possible at the table level. This increases the application

availability, makes DBA tasks more efficient, and decreases total cost

of ownership (TCO). Semantic partitioning addresses these issues as

each of these operations can now be executed at the partition level or

at the table level. See Business Case 1 in Appendix B for an example

of when to use partitioning.

Configuring ASE for Semantic Partitioning

As part of the upgrade process or when you want to use semantic

partitions, two actions need to be taken. First, you have to acquire a

license for Semantic Partitioning. This can be accomplished by con-

tacting Sybase directly or through an account representative. Second,

the following two new configuration options need to be activated:


• number of open partitions — This option should be set high
enough to allow opening multiple tables with multiple partitions.
Typically this number should be greater than the total number
of open objects and number of open indexes. The use and
effect of this option can be monitored using sp_sysmon or
sp_monitorconfig. You may also use sp_countmetadata to
estimate the configuration value.
• partition spinlock ratio — This value is set by default to 10,
meaning there is one spinlock for every 10 open partitions. The
default setting is adequate for most environments.
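For example, both options can be set with sp_configure (the values
shown are illustrative, not recommendations):

sp_configure "number of open partitions", 5000
go
sp_configure "partition spinlock ratio", 10
go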

Partition Support in ASE 15

All tables will now be defined with one partition at the time they are

created if no partitioning method is specified. This behavior is new to

ASE 15.

The following list highlights the types of partitioning now avail-

able with ASE 15 and the operational changes that now take

advantage of or support partitioning.

• Non-semantic, round-robin partitioning continues to exist from
prior releases.
• New ASE 15 semantic partitioning methods are:
  • Hash-based partitioning
  • Range partitioning
  • List partitioning
• Index partitioning:
  • Global indexes
  • Local indexes
• Partition-aware query processor engine — Takes advantage of
partitions and underlying disk devices/segments, and provides
highly scalable performance gains.
• Data placement for I/O parallelism — Individual partitions or
indexes can be placed on separate devices/segments.
• Partition-level operations — dbccs, update statistics, reorgs, table
truncations, and bcps can selectively be done on partitions. The
level of concurrency that can be expected in a partitioned object
environment is shown in Figure 3-1.


In an unpartitioned table, differences in application requirements

and database management activities collide, often resulting in

poor performance or downtime. In a partitioned environment,

both can coexist simultaneously. In addition, disk layout strate-

gies, I/O planning, and parallelism may have a significant

positive impact on partitioning gains.

• Building block for inter-table parallelism — The query processor

utilizes partition-level information when determining how to pro-

cess joins.

Partition Types

The pre-15 partitioning is now referred to as round-robin partitioning.

This continues to be the only non-semantic and unlicensed partition-

ing scheme.

ASE 15 extends the partitioning with three new licensed parti-

tioning methods.

• Range partitioning
• Hash partitioning
• List partitioning

[Figure 3-1: The effect of DBA utilities and query processing on
partitioned and unpartitioned tables. In an unpartitioned table, update
statistics, reorgs, data loads, truncates, large deletes, OLTP activity,
and DSS queries on historical data all contend for the entire table; in
a partitioned table, these operations can be directed at individual
partitions (this month’s data, last month’s data, two-month-old data,
13-month-old data).]


Range Partitioning

This is a partitioning strategy that places data and index pages on a

partition based on the value of the partition key as it relates to the

range of values that are assigned to a particular partition. The data

values of a range partition cannot overlap. Rows placed in a range

partition are deterministic and support partition exclusion by the

optimizer when it is cost estimating and selecting the access paths

and devices/CPUs to be used to resolve the query. By allowing for

data exclusion or partition elimination, performance improvements

can be gained because unnecessary partitions are not utilized. In

order to get additional performance gains, segmenting the partition

across multiple devices and associating segments with partitions

allows parallel I/O processing to occur.

Range partitions are defined by specifying an upper bound parti-

tion key value for each partition defined to the table. The lower

bound of the partition range is one unit more than the upper bound of

the previous range partition. When a table that contains range data
is first created, creating only one partition sets the
table up to be altered later, adding partitions as they become nec-

essary. Consider, for example, sales data for a region. If the table is

futuristic (i.e., allowing for growth in the company), the first partition

may be allocated for the initial sales region. As the company grows

and new sales regions are defined, the database administrator can add

another partition by simply altering the table. Example 1 shows the

command used to create a one-range partition table followed by the

output of sp_helpartition table-name.

Example 1:

create table RangeTable

( SalesRegion int,

TotalSales char(5)

)

partition by range (SalesRegion)

(Region1 values <= (100))

go

name type partition_type partitions partition_keys

---------------- ---------- -------------- ----------- --------------

RangeTable base table range 1 SalesRegion

(1 row affected)


partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

Region1 684526441 1 default Apr 24 2005 9:08PM

Partition_Conditions

--------------------

VALUES <= (100)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- ------------------ -------------------

1 1 1 1.000000 1.000000

(return status = 0)

Given the following scenario for disk layout, Example 2 shows parti-

tion data placement on different segments.

Example 2:

create table AlterPartitionKeyColumn

( ColumnA int,

ColumnB char (5),

ColumnC varchar (5),

ColumnD smallint,

ColumnE bit,

ColumnF datetime,

ColumnG numeric(10,2),

ColumnH tinyint,

ColumnI smalldatetime

)

partition by range (ColumnA, ColumnB)

(value1 values <= (100, "aaaa") on seg1,

value2 values <= (200, "hhhh") on seg2,

value3 values <= (300, MAX) on seg3)

go

[Figure 3-2: Partition data placement across segments: value1
(values <= 20000) on seg1 spanning three disks, value2 (values <=
40000) on seg2 spanning two disks, and value3 (values <= 60000) on
seg3 on one disk.]

sp_helpsegment seg1

go

segment name status

------- ---------------- ------

3 seg1 0

device size free_pages

------------------------------ ---------------------- -----------

seg_device1 20.0MB 10193

Objects on segment 'seg1':

table_name index_name indid partition_name

------------------------ -------------------------- ------ --------------

AlterPartitionKeyColumn AlterPartitionKeyColumn 0 value1

total_size total_pages free_pages used_pages reserved_pages

----------------- ------------- ------------- ------------- -------------

20.0MB 10240 10193 47 0

(return status = 0)

You can also get data placement information about all of the parti-

tions of a table using the sp_objectsegment system stored procedure.

sp_objectsegment AlterPartitionKeyColumn

go

Partition_name Data_located_on_segment When_created

-------------- ----------------------- -------------------

value1 seg1 Oct 1 2005 6:28PM

value2 seg2 Oct 1 2005 6:28PM

value3 seg3 Oct 1 2005 6:28PM

The following range definitions are examples of valid and invalid

range definitions:

Range Definition Description

(0-1000, 1001-2000, 2001-5000) Valid range definition

(A-D, E-K, M-Q, R-W, X-Z) Valid range definition

(A-M, J-Q, R-Z) Invalid range definition

(Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec) Valid range definition


When to Use Range Partitioning

One of the major issues with range partitions is data skew. For exam-

ple, assume you have a greeting card sales table partitioned by

month. If the query is on the monthly sales of Mother’s Day cards,

the month of May would hold the majority of the sales. The same

would be true if you wanted the monthly sales of Easter cards. The

majority of your data would be in the April partition, depending, of

course, on when Easter falls.

Partitioning in general addresses very large databases and ever

increasing data volume. Range partitioning in particular is the best

candidate to handle large tables since the goal of range partitioning is

to provide for partition elimination based on a range of data.

Range partitioning should be considered with:

• Time-based data (e.g., orders in every week) — Partition the

table using the week of the year.

Example:

partition by range(week)

partition1 values <= 1,

...

partition52 values <= 52

A value of MAX is also allowed for the value of the last partition.

Please note a MAX partition clause may not be necessary in this

example as there are just 52 weeks in a year even if it is a leap

year. The value of MAX should be considered where the upper

bound value is unknown for the last partition. ASE 15 allows
computed columns, columns for which data is generated at
run time; however, computed columns cannot be used as partition
keys. In this example, week of the year could be either a
materialized or nonmaterialized computed column, depending on
your application, but it could not then serve as a semantic
partition key.


• Data that is ordered (e.g., salary ranges)

Example:

partition by range(salary, employee_id)

partition1 values <= 20000,

partition2 values <= 40000,

...

partition5 values <= 80000,

partitionN values <= MAX (put all of the employees making more

than $80,000 in this partition)

If the partition with MAX value grows faster than other partitions,

resulting in an obvious partition skew, the partition cannot be split.

The only way to overcome this deficiency is to add one or more new

partitions and bcp the data out from this partition, truncate the parti-

tion that needs to be split, and reload the data into the new partitions.

You do not need to split the input files; the data will be loaded

according to the partition key.

Range partitioning allows for archival and truncation of data that

has aged out.

Range partitioning also has the benefit of permitting a mixed

workload environment to coexist. For example, if the table is parti-

tioned with the key “year,” then the current year’s data may be the

target for OLTP operations while previous years’ data can be used

for DSS operations. While such a database will assume the propor-

tions and characteristics of a VLDB, range partitioning helps

maintain performance objectives in OLTP, DSS, and mixed data pro-

cessing modes.

Application examples include:

• “Rolling-window” operations — Cyclical operations where the
same operation or event occurs at regular cycles, such as daily
sales for 31 days per month. After the 31st partition has been
utilized, the first partition can be used again.
• Historical data — Data grows continuously. No purge is
performed (assuming DSS proportions).
• Delete bank statements more than five years old — Massive
delete operations are performed.

In all of the above cases, the table is expected to have one or more

time-sensitive columns (e.g., date last signed on, date account closed,

transaction date, etc.). In each of these cases, the leading column of

the partition key is this date column. So, potentially you may end up


having 365 partitions if you want to track data for each day of the

year.

In a multi-gigabyte database table with millions of transactions

per measure of time, a logged operation — like the delete command

— would generate multiple problems. For example, to accommodate

a delete operation you need to:

• Manage space usage within the transaction log.
• Find a large enough processing window during off-peak time to
handle deleting the data.
• Accommodate high resource requirements for such deletes.

Data partitioning is a welcome rescue feature for customers who are

spending entire nights and weekends deleting old data. You now can

simply truncate the partitions that are no longer necessary. An opera-

tion that would otherwise take several hours to complete may now

only need a few minutes.

Range partitioning also allows for adding new ranges at the top

end of the range without reorganizing the rest of the table. As addi-

tional partitions are required, the table can be altered to add partitions

to the end of the partition chain.
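For example, the RangeTable created earlier in this chapter with a
single Region1 partition could be extended with a new top-end range
as follows (the partition name and bound value are illustrative):

alter table RangeTable add partition
(Region2 values <= (200))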

Hash Partitioning

This is a partitioning strategy that places data pages on a particular

partition based on a hashing algorithm. The hashing algorithm is

based on the datatype and length and is managed internal to Sybase;

therefore, the hashing algorithm cannot be manipulated by the user.

However, altering a table and changing a partition key column’s

datatype may affect the partitioning. The goal of this partitioning

method is to attempt to equally balance the distribution of values

across the partitions, eliminating data skew. Because the hash is
based on the datatype, the algorithm assumes the data is spread in
equal proportions across the entire domain of the datatype; when it
is not, even hash partitions are likely to have skew. The data has the charac-

teristic of being deterministic, similar to the range partition; however,

a hashing function is used to determine the partition where the data or

index row will be placed. Although hashing is deterministic, partition

elimination cannot occur for table scans on portions of the table —

for example, a range scan. The reason is that it is likely that the


different values within the range will have different hash values and

consequently be spread across multiple partitions.

The following SQL code would create a table with four hash par-

titions. The diagram that follows illustrates how the hash partitions

can be split across devices.

create table sales_history

(store_id int not null,

ord_num int not null,

sale_date datetime not null,

item_num char(10) not null,

qty smallint not null,

discount float not null)

partition by hash(item_num)

(p1, p2, p3, p4)

go

1> sp_helpartition sales_history

2> go

name type partition_type partitions partition_keys

------------- ---------------- -------------- ----------- ----------------

sales_history base table hash 4 item_num

(1 row affected)

partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------------- --------------------

p1 512001824 1 default Sep 18 2005 6:00PM

p2 528001881 1 default Sep 18 2005 6:00PM

p3 544001938 1 default Sep 18 2005 6:00PM

p4 544001947 1 default Sep 18 2005 6:00PM

[Figure 3-3: Hash partitions p1, p2, p3, and p4 spread across
multiple disks under the default segment.]

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- ----------------- -------------------

1 1 1 1.000000 1.000000

(return status = 0)

Note that if you do not include data placement clauses, the partitions

are created on the default segment.

When to Use Hash-based Partitioning

Hash-based partitioning can benefit queries that are searching for a

single row or a small number of rows in a very large table or for

tables in which the primary search arguments are high cardinality

data elements such as SSN, product SKUs, customer IDs, account

numbers, etc., that are accessed using equality (i.e., product_sku=

'N12345BA234') vs. range scans. If most table queries require full

table scans, hash partitioning may be the best partitioning strategy.

Hash partitioning is also a good choice for tables with many partitions, as the user does not need to know or define the dividing criteria for each partition.

The main goal of hash-based partitioning is to address the issue

of data skew while maintaining the placement of deterministic data.

To define a table with hashed partitions in such a way as to avoid

data skew, the column(s) chosen for the partition key should define

the key to be as unique as possible. Keys that are not defined

uniquely will tend to bunch the data together, thereby nullifying the

benefits of the hashed partitions. The following examples of hashed

tables illustrate both good and bad hashed partitions.

Example 1: Bad hash partitioning

create table bad_hash_key (

store_id int not null,

item_id int not null,

date_sold datetime not null,

sales_person varchar(30))

partition by hash (date_sold)

(hash1 on seg1,

hash2 on seg2,

hash3 on seg3)


go

insert into bad_hash_key values (1, 1, "10/27/2005", "JaneDoe")

go 1001

(1 row affected)

1001 xacts:

The data distribution is highly skewed because of the duplicate hash

partition key:

Table-name partition-name num_rows

------------------- ----------------- ---------------------------------

bad_hash_key hash1 1001 <= all rows in one partition

bad_hash_key hash2 0

bad_hash_key hash3 0

Because the insert statement used the date portion only and all the

date values happen to be the same, the resulting data placement has

hash partition skew. If you insert a large number of rows where the

partition key column has a high duplication ratio, hash partitioning

also results in partition skew. To avoid that skew, use the full datetime value, including the time portion. While it is not possible to list all 1001

rows of data with specific time stamp values, it is essential to note

that by just changing the time value to a more granular value, much

of the skew can be avoided. In the following example, an algorithm is

used to increment the initial datetime value by 500 milliseconds.

Example 2: Good hash partitioning

declare @increment int, @myinittime datetime

select @increment=1

select @myinittime="10/27/2005 00:00:00.000"

while (@increment <= 1001)

begin

select @myinittime = dateadd (ms, 500,@myinittime)

insert into bad_hash_key values (1,1,@myinittime, "John")

select @increment = @increment + 1

end


And the data distribution is much better:

Table-name partition-name num_rows

------------------------ -------------------------- -------------------

bad_hash_key hash1 340

bad_hash_key hash2 326

bad_hash_key hash3 335

The following list identifies some of the criteria that might lead

toward selecting hashing as the partitioning strategy for a table.

� If your table is large, with no particular ordering of values on any

key column that is commonly used in applications for ordering or

grouping the resulting data

� For load balancing of data across partitions

� To avoid data skew when you expect the data to skew because of its nature. Recall the earlier example that stored greeting card sales. If you use the sale date and time as the hash partition key, the data will be distributed across the partitions irrespective of the date and time of sale.

� When you are likely to have a large table and you do not know

what columns to specify as the basis for partitioning. In this case,

you could use hash partitioning on a representative key of the

data within the table. It is easy to create “n” hash partitions as

opposed to specifying a range or list for each partition.

You should keep the following items in mind when choosing

hash-based partitioning.

� Most datatypes can be used as hashkey columns.

� Choose a key that isn’t skewed!

� Choosing a partition key with a high degree of duplicates or

where one value is far more frequent than others can lead to partition skew. Going back to the greeting card sales example, if you

were to use only the date part of the greeting card sales and 90%

of the sales happen on one day, then even hash partitioning may

be skewed.


List Partitioning

This is a partitioning strategy where lists of data are grouped together

to create a partition. For example, a sales territory may be composed

of several continents. In order to access information about a particular sales territory, list partitions allow for partition elimination of all

sales territories except for the one of interest. The following example

and the corresponding diagram illustrate list partitioning.

Example:

create table continents

( country_code integer not null,

country_name char (25) not null,

region_code varchar (30) not null,

comment varchar (152) not null

) on seg1

partition by list (region_code)

( region1 values ('The_Americas'),

region2 values ('Asia'),

region3 values ('Europe'),

region4 values ('Australia', 'Other'))

go

In the example, additional sales territories and states can be added (a sketch of adding a new list partition follows Figure 3-4). However, for values to move from one list partition to another, the data must be reloaded into the table. If a partition called region5 for Africa must later be added and some of region4's data belongs in it, then region4's data has to be bulk copied out, region4 truncated, and the data reloaded, at which point the rows are automatically placed in the correct list partitions. As you can see, list partitions largely inherit the characteristics of range partitions except that they are more specific in granularity.

[Figure 3-4: list partitions region1 through region4 placed on segment seg1]

Because of this higher granularity, the possibility of point queries resolving with partition elimination is very high.
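The following is a minimal sketch of adding the region5 partition discussed above; it assumes ASE 15's alter table ... add partition clause for list partitions and the continents table from the example:

alter table continents add partition
    (region5 values ('Africa'))
go

Rows that should belong to region5 but were previously stored under region4's 'Other' value would still have to be bulk copied out and reloaded before they land in the new partition.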

When to Use List Partitioning

List partitions are unique in that you decide where to place data based

on specific values of the partitioning key. Unlike the other methods

where data is grouped by a range of data or by the resulting hashed

key value, list partitions are dependent on the key values always

existing prior to being placed in the table. If a key value has not been

defined in the partition definition, ASE will raise an error. List parti-

tions can be useful for partitioning large tables on partition keys that

are based on summary data, such as fiscal quarter, where you need to

keep summary data for five years about sales by quarter. You could

define the partitions such that partition and data maintenance could

be performed only on the most current data. This will retain the abil-

ity to truncate old or unnecessary data using the new truncate

partition command.

Example:

The table ListPartition1 is partitioned by list. The table is partitioned

based on the business units and the associated states for each unit.

There can be as many as 250 values per list.

1> sp_help ListPartition1

2> go

Name Owner Object_type Create_date

-------------------------- ------------ ----------------------- ----------------------

ListPartition1 dbo user table Sep 30 2005 2:02PM

(1 row affected)

Column_name Type Length Prec Scale Nulls

Default_name Rule_name Access_Rule_name Computed_Column_object Identity

-------------------- ---------------------------- ----------- ---- ----- -----

------------ --------- ---------------- ---------------------- --------

state char 2 NULL NULL 0 NULL

NULL NULL NULL 0

name varchar 20 NULL NULL 0 NULL

NULL NULL NULL 0


Object does not have any indexes.

No defined keys for this object.

name type partition_type partitions partition_keys

------------------- ---------------- -------------- ----------- ----------------------

ListPartition1 base table list 5 state

partition_name partition_id pages segment create_date

------------------- ------------ ----------- --------------------- ------------------

region1 1344004788 1 default Sep 30 2005 2:02PM

region2 1360004845 1 default Sep 30 2005 2:02PM

region3 1376004902 1 default Sep 30 2005 2:02PM

region4 1392004959 1 default Sep 30 2005 2:02PM

region5 1456005187 1 default Sep 30 2005 2:10PM

Partition_Conditions

-----------------------------------------------------------------------------------

VALUES ('NY','NJ','MD','VA')

VALUES ('FL','PA','DE','NC')

VALUES ('OH','DC','SC')

VALUES ('CA','other')

VALUES ('TX')

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

0 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)


List partitioning also has the added benefit of acting as a check constraint mechanism. If an attempt is made to insert a value that does not belong to any list, an error is generated and the row is not added, as shown in the following command.

1> insert into ListPartition1 values('AP','Jane Doe')

2> go

Msg 9573, Level 16, State 1:

Server 'DMGTD02A2_SYB', Line 1:

The server failed to create or update a row in table 'ListPartition1'

because the values of the row's partition-key columns do not fit into

any of the table's partitions.

Command has been aborted.

(0 rows affected)

Round-robin Partitioning

This is the partitioning scheme carried over from prior releases, and it is the default partitioning scheme applied during the upgrade process. In the absence of any partitioning defined on a table, this scheme is automatically applied with one partition. This partitioning scheme enjoys most of the benefits of the new partitioning schemes except partition elimination, which is the cornerstone of improved performance via the improved optimizer and smart query processor. Partition elimination is not possible with this method because the scheme does not include a partitioning key, which is critical to eliminating partitions. This is the only partitioning scheme that does not require a separate license.

The main characteristics of round-robin partitioning are:

� Randomly distributed data among the partitions

� No help to the optimizer on queries, since there are no semantics associated with the distribution of data

� Reduces last page contention

� Supports parallel create index and parallel data access depending

on the data placement

Round-robin partitioned tables can have global or local indexes, and

are the only tables where a clustered index can be either local or

global. The data placement and partition balance in this scheme

depends on the existence of a clustered index. As you may notice in


the following sequence of operations, if a clustered index exists on

the table, the data will be placed in the first partition.

First, remove all but the first partition:

alter table sales unpartition

go

Then, remove the data from the table. (This example assumes that

data has already been bcped out.)

truncate table sales

go

Next, repartition the table with four partitions:

alter table sales partition 4

go

Warning: Empty Table 'sales' with a clustered index has been

partitioned. All data insertions will go to the first partition. To

distribute the data to all the partitions, re-create the clustered index

after loading the data.

Next, the data is reloaded into the table:

$SYBASE/OCS-15/bin/bcp partition_test1..sales in RR_DATA.txt -c -t"," -r"\n"

If sp_helpartition is now executed against the table, the data is

reported to have been loaded starting with partition 1.

sp_helpartition sales

go

partitionid firstpage controlpage ptn_data_pages

----------- ----------- ----------- --------------

1 817 816 154 <= all data in

2 489 488 1 first partition

3 569 568 1

4 617 616 1

(4 rows affected)

Partitions Average Pages Maximum Pages Minimum Pages Ratio (Max/Avg)

----------- ------------- ------------- ------------- --------------------

4 39 154 1 3.948718

(1 row affected)

(return status = 0)


If the clustered index is dropped and recreated using the following

sequence of commands, the data will be spread across the available

partitions, as shown from the results of the sp_helpartition.

drop index sales.sales_cdx_1

go

create clustered index sales_cdx_1 on sales(sale_id)

go

Warning: Clustered index 'sales_cdx_1' has been created on the partitioned

table 'sales' with 4 partitions using the segment 1 with 1 devices. For

complete I/O parallelism, each partition should be on a separate device.

sp_helpartition sales

go

partitionid firstpage controlpage ptn_data_pages

----------- ----------- ----------- --------------

1 681 680 14 <= All partitions

2 665 664 32 have data

3 753 752 40

4 729 728 68

(4 rows affected)

Partitions Average Pages Maximum Pages Minimum Pages Ratio (Max/Avg)

----------- ------------- ------------- ------------- --------------------

4 38 68 14 1.789474

If a clustered index does not exist, then the data may be placed in any

partition. It is important to understand that the partitions may not be

balanced. The only way to balance the round-robin partitions is to

drop and recreate the clustered index. Even if your business requirements do not necessitate a clustered index on the table, a clustered

index on an appropriate column has to be created in order to

rebalance the partitions. You will notice that such an inconvenience

is avoided in the three new partitioning schemes.

An alternative approach to balancing the round-robin partitions is to split the input data into equally sized input files and bulk load each file into a specific partition by specifying the partition in the bcp command, as sketched below.
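A minimal sketch, assuming the sales table from the sequence above and hypothetical data file names; ASE 15's bcp accepts either the legacy :slice_number suffix or a named partition clause:

$SYBASE/OCS-15/bin/bcp partition_test1..sales:1 in RR_DATA_1.txt -c -t"," -r"\n"
$SYBASE/OCS-15/bin/bcp partition_test1..sales:2 in RR_DATA_2.txt -c -t"," -r"\n"

and so on for the remaining partitions, or, using a partition name:

$SYBASE/OCS-15/bin/bcp partition_test1..sales partition ptn1 in RR_DATA_1.txt -c -t"," -r"\n"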


When to Use Round-robin Partitioning

The round-robin partitioning strategy is a partitioning method

whereby each row is placed on the next available partition’s page

chain. This method is also called striped partitioning. The data rows

are sequentially distributed among the defined number of partitions.

Round-robin partitioned tables do not have partition keys; therefore,

no partitioning criterion is used to place data pages on a particular

partition’s page chain.

Round-robin partitions are defined much like the original

(pre-ASE 15) partitions. With ASE 15, the round-robin partitioning

strategy is the default for any table that is not created with a specific

partitioning strategy. It is also the default strategy that is applied during the ASE server upgrade from a prior version of ASE.

Round-robin partitioning allows for the distribution of data pages

across multiple disks. If only one disk is associated with a table, partitioning of the table provides the same benefits as prior releases of

ASE; it adds additional data page chains to address the “hot spot”

issues with mass data loads. Round-robin partitioning also allows for

parallel worker processes to be utilized for large table scans where

the optimizer determines that a large enough portion of the table must

be read in order to resolve the query.

Round-robin partitioning does not avoid data skew. In the case

where data is loaded using the bcp utility, the data is loaded on the

selected partition’s page chain and not across multiple page chains.

The following examples are methods for creating round-robin

partitioned tables.

Example 1:

create table NoPartitionTable

( ColumnA int,

ColumnB char(10)

)

Although no partition is specified, the partition strategy for the table

is defaulted to round-robin with one partition, as seen in the output

from sp_help table-name.

1> create table NoPartitionTable

2> ( ColumnA int,

3> ColumnB char(10)


4> )

5>

6> go

1> sp_help NoPartitionTable

2> go

Name Owner Object_type Create_date

---------------------------- ------------ ------------ -------------------

NoPartitionTable dbo user table Sep 18 2005 9:24PM

(1 row affected)

< some output deleted>

partition_name partition_id pages segment create_date

--------------------------- ------------ ------------ -------------------

NoPartitionTable_592002109 592002109 1 default Sep 18 2005 9:24PM

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- ------------------- ------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

< some output deleted >

(return status = 0)

Example 2:

create table RoundRobinTableNoSegments

( ColumnA int,

ColumnB char(5)

)

partition by roundrobin 3

Example 3:

create table RoundRobinTableOneSegment

( ColumnA int,

ColumnB char(5)

)

partition by roundrobin 3 on (seg1)


Example 4:

create table RoundRobinTableOnSegments

( ColumnA int,

ColumnB char(5)

)

partition by roundrobin 3 on (seg1, seg2, seg3)

In Examples 2, 3, and 4, the partitioning strategy is the same but the placement strategies are different. The use of parallelism, the

cornerstone of achieving better performance, needs not only the

appropriate partitioning strategy but a good data placement strategy

as well. Parallel access to multiple disk devices or disk segments is

key to achieving higher degrees of parallelism.

Partitioning Strategies

As presented earlier, there are four industry-recognized partitioning

options that are available with ASE 15: range, list, hash, and

round-robin. The option chosen to implement partitions depends on

the data access requirements. The dimensions on which fact tables

are built lend themselves to data partitioning. By combining partition

segmentation with data and index segmentation, DSS applications

should be able to achieve acceptable performance levels. Each partitioning strategy is implemented to allow the database administrator to

select the strategy that best fits the needs of the data, application, or

performance goals to support the strategic direction of the organization. While each strategy has benefits that allow it to stand alone as

an implementation option, in some cases combinations of data and

index partitioning may be desirable. For any partitioning strategy to

be an effective method for I/O balancing, the DBMS has to provide

parallel processing options in conjunction with the host environment

supporting parallel access to data. In the case of ASE, parallelism is

available in both partitioned tables and table scans.


Inserting, Updating, and Deleting Data in Partitions

Inserting Data into Semantic Partitions

Data is inserted into partitions according to the partition key. The row

assignment to a specific partition during an insert is the same whether

it is done in batch mode using bcp or via interactive mode. The data

insertion methods described here do not apply to round-robin partitioning. Data insertion has differing effects based on the specific

partitioning strategy chosen. When multiple columns are used as a

composite partition key, the partition chosen to insert data also differs

with each partitioning type. There can be as many as 31 columns in

the partition key.

Inserting Data into Range Partitions

When a single partition key column is used, data is inserted into the

range partition where the inserted row’s partition key column fits

between the lower and upper bounds of a specific partition. However,

when a composite partition key is used with a range partition, the

server continues to resolve the composite partition key by evaluating

each key in order from the first key column to the last key column

until it finds the right partition for the data row. If it finds no matching partition, the insert operation is rejected.

The algorithm that the server follows to find the matching range

partition with a composite range partition key is shown below.

If a range partition with composite key (c1, c2) is defined as:

partition by range (c1, c2)
(p1 values <= (v1, v2),
p2 values <= (v3, v4),
p3 values <= (v5, v6))

then a row is assigned as follows:

if c1 < v1, then the row is assigned to p1
if c1 = v1 and c2 <= v2, then the row is assigned to p1
if c1 > v1 or (c1 = v1 and c2 > v2), then
    if c1 < v3, then the row is assigned to p2
    if c1 = v3 and c2 <= v4, then the row is assigned to p2
    if c1 > v3 or (c1 = v3 and c2 > v4), then
        if c1 < v5, then the row is assigned to p3
        if c1 = v5 and c2 <= v6, then the row is assigned to p3
        if c1 > v5 or (c1 = v5 and c2 > v6), then the row is not
            assigned and the insert is rejected

Example:

create table physician_list (

fname varchar(30) not null,

lname varchar(30) not null,

physician_id int not null,

specialty_id smallint not null,

specialty_name varchar(30),

other columns ...)

partition by range (physician_id, specialty_id)

(FirstPartition values <= (5000, 14) on segment1,

SecondPartition values <= (10000, 10) on segment2,

ThirdPartition values <= (50000, 25) on segment3)

Adaptive Server partitions the rows in this way:

� Rows with these partitioning key values are assigned to

FirstPartition: (4001, 12), (5000, 12), (5000, 14), (3000, 18)

� Rows with these partitioning key values are assigned to

SecondPartition: (6000, 18), (5000, 15), (5500, 22), (10000, 10),

(10000, 9)

� Rows with these partitioning key values are assigned to

ThirdPartition: (10000, 22), (50000, 2), (50000, 16). A row with (80000, 24) would exceed ThirdPartition's upper bound and be rejected.

ASE tries to match the first key. If the first key is equal to the partition key upper bound, then it tries to evaluate the next key. If that key

resolves the partition identification, then the row is inserted. As you

can see, rows (5000, 12) and (5000, 14) were placed in FirstPartition,

but row (5000, 15) was placed in SecondPartition. When all the keys

in the composite partition key resolve to a particular partition, the

row is inserted in that partition as in row (5000, 14).

You may wonder why Sybase doesn’t consider all the keys rather

than evaluate them in order. The answer is that while partitioning was

implemented to help DSS and mixed workload environments, the

goal was to continue to focus on OLTP performance. If Sybase


evaluated all the keys each time, then the number of comparisons on

inserts and updates would be greater, causing degradation in OLTP

performance on mixed workload systems.

Inserting Data into Hash Partitions

In the hash partitioning scheme, the composite key in its entirety is

treated as a single data stream and hashed using Sybase’s internal

hashing scheme. The row is inserted based on the hash value in the

appropriate partition. The rule is the same whether the hash partition

key consists of one or more columns. That is why if you believe one

column in a table is likely to produce a hash partition skew, adding

more columns into the hashkey may result in a better data

distribution.
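As a minimal sketch (the table and column names here are hypothetical), a composite hash key is declared simply by listing additional columns in the partition clause:

create table sales_detail
(store_id int not null,
item_num char(10) not null,
sale_date datetime not null)
partition by hash (item_num, store_id)
(p1, p2, p3, p4)
go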

Inserting Data into List Partitions

As stated previously, list partitioning works like a check constraint.

Therefore, a row belongs to one and only one partition. If the keyword other is included in any one of the partitions, then all the rows

that fail an assignment will go to that list. If the keyword other is not

used and the assignment fails, the row is not inserted.

Deleting Data from All Semantic Partitions

No special rules apply when deleting data from semantic partitions.

Please note that if you want to delete all the rows from a partition, you can use the truncate partition syntax, shown below, to get the maximum performance and convenience.
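A minimal sketch, using the hash-partitioned sales_history table created earlier in the chapter:

truncate table sales_history partition p1
go

The truncation deallocates the partition's pages rather than logging each deleted row, which is what makes it so much cheaper than a mass delete.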

Updating Data in All Semantic Partitions

Updating a partition key column results in an expensive deferred

update, where the original row is deleted from the current partition

and a new row is inserted into the partition where the new row

belongs.
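For example (using the sales_history table from earlier; the values shown are hypothetical), the following update changes the partition key and can therefore move the row to a different partition:

update sales_history
set item_num = 'B99999XX001'
where ord_num = 1001
go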


Built-in Functions

Several built-in functions now support partitioning. Many built-in functions have been deprecated, and in their place new built-in functions supporting partitioning have been introduced. Here are some of the built-in functions and some general usage patterns.

� data_pages — Replaces data_pgs and ptn_data_pgs.

data_pages returns the number of pages used by a table, index,

or partition. The syntax is:

data_pages(dbid, object_id [, indid [, ptnid]])

For APL tables with clustered indexes, an indid of 0 reports data

pages, and an indid of 1 reports index pages.

Example:

select convert(varchar(20),o.name) object,

sum(data_pages(db_id(),o.id,0)) data_pages,

sum(data_pages(db_id(),o.id,i.indid)) index_pages

from sysobjects o, sysindexes i

where o.type='U' and o.id = i.id

group by o.id

go

object data_pages index_pages

------------------- ----------- -----------

sales 154 1

sales_history 3 3

NoPartitionTable 1 1

A return value of 0 indicates that the entered parameters are in error.

� reserved_pages — Just like data_pages, reserved_pages replaces the function reserved_pgs. The function reserved_pages returns the number of pages reserved for a table, index, or partition. The syntax is:

reserved_pages(dbid, object_id [, indid [, ptnid]])

For All Page Locking (APL) tables with clustered indexes, an indid of 0 reports data pages, and an indid of 1 reports index pages. A return value of 0 indicates that the entered parameters are in error.
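A brief usage sketch, in the style of the data_pages example above (the query itself is illustrative, not from the original text):

select convert(varchar(20), o.name) object,
    reserved_pages(db_id(), o.id, 0) reserved
from sysobjects o
where o.type = 'U'
go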


� row_count — The row_count function replaces rowcnt.

row_count estimates the number of rows in a table or partition.

The syntax is:

row_count(dbid, object_id [, ptnid])

As discussed previously, every table has at least one partition irrespective of whether the table is partitioned. An easy way to find out if a table is partitioned is to use the sp_help stored procedure. For an unpartitioned table, sp_help shows one partition whose name is the table name concatenated with the object_id. Also notice that the partition ID will be the same as the object_id.

Another way to find the tables whose partition ID matches their object_id (that is, the unpartitioned tables) is to query syspartitions and sysobjects:

1> select convert(varchar(30), o.name) from syspartitions s,

sysobjects o where o.id = s.partitionid and o.type = 'U'

2> go

--------------------

sales

NoPartitionTable

To get row_count for each partition in a table, use a join query

similar to this:

select convert(varchar(30),o.name) "Table-name",

convert(varchar(30),p.name) "partition-name",

row_count(db_id(), object_id(o.name), partitionid) num_rows

from sysobjects o, syspartitions p

where o.id=p.id

and p.indid = 0

and o.type='U'

and o.name = "bad_hash_key"

go

Table-name partition-name num_rows

------------------------ -------------------------- -------------

bad_hash_key hash1 340

bad_hash_key hash2 326

bad_hash_key hash3 335

(3 rows affected)


To get row_count for an unpartitioned table, use a query similar

to:

select row_count(db_id(), object_id('NoPartitionTable'),

object_id('NoPartitionTable'))

The above query shows that for an unpartitioned table, the partition ID and object_id are the same.

� partition_id() — A new system function that returns the partition

ID for a specified table and partition.
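A brief sketch, assuming partition_id takes the table name and partition name as arguments and using the bad_hash_key table from earlier:

select partition_id("bad_hash_key", "hash1")
go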

Data Partition Implementation and Upgrade Strategies

As stated earlier in the chapter, when you upgrade your server to

ASE 15, data partitioning is in effect for all tables. For preexisting

non-partitioned tables that are upgraded to ASE 15, the way the table

is considered and accessed for query resolution has not changed. It

will still be handled as if the table were a single-striped table. For

preexisting partitioned tables that are upgraded to ASE 15, the way

the table is considered and accessed for query resolution will change.

It will now be processed as a round-robin partitioned table. Each data row that is added to the table will be placed on the next available partition data chain.

During the upgrade process from pre-15 releases, the partitioned

tables are unpartitioned. The ASE 15 release bulletin states:

Unpartitioning of user tables during upgrade

During upgrade, Adaptive Server unpartitions any partitioned

tables. You are required to repartition the table post-upgrade. The

unpartitioning occurs due to the requirement in ASE 15 for each

partition to have a different partitionid. The expensive operation

of changing the partitionid for each page during upgrade has

been avoided and hence the unpartition during upgrade.

With the new partitioning strategies available with ASE 15, there will

be many business reasons for considering one or more strategies. The

choice of which strategy to implement on a table will depend on the

use of the data, the quantity of data in the table, and the possible partitioning of the data based on the partitioning key.


The first action to consider is which tables can benefit from partitioning. Although partitioning may be considered for small and

medium-sized tables, the greater performance benefits will be gained

from partitioning large tables. For that reason, the discussions will

focus on the criteria for choosing large candidate tables. However,

the criteria discussed in this chapter will be applicable to any size

table.

The tables that you want to look for are:

� Tables that are frequently accessed using table scans. Each table

might be a candidate for round-robin or range partitioning.

� Tables that contain portions of data used to resolve a higher percentage of the queries. For example, 80% of your queries might

go against the current six months of data, 15% against the next

six months of data, and the remaining 5% against data that is one

to two years old. This type of table would be a candidate for

range partitions.

� Tables where data can be grouped, such as sales territory data.

The data might be grouped by states within a territory. In this

case, list partitioning would be a candidate partition method. The

data might be grouped by territory IDs. This data might be either

list or range partitioned based on your knowledge of the data.

� Tables where data queries would return a unique row or a very

small number of rows would best be handled by hash-based

partitioning.

As with any changes that are being considered for an existing table,

there should be a business rule or performance issue that identifies a

need for considering structural changes to the database. Randomly choosing a table for partitioning may not have the desired

benefits. Large tables that are infrequently accessed or for which the

read performance is not an issue should not be considered when you

first start implementing the new partitioning strategies.

If you want to be proactive, use the MDA tables (as detailed in

Chapter 14) to determine candidate tables based on data read volume

and the number of rows in the table. Once you have identified possible tables, search your database to determine which stored procedures

are using the tables. After you have identified the stored procedures,

query the MDA tables to determine the frequency with which the

stored procedures are executed. Along with the information about the

frequency of usage, average execution times should be identified and


used as baseline data. You may choose to use the Query Plan Matrix

feature to record the baseline data. You are now ready to take further

steps to implement a data partitioning strategy.
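As a starting point, a sketch of such an MDA query (the column choice and ordering are illustrative, and the MDA tables must be installed and monitoring enabled as described in Chapter 14):

select DBName, ObjectName, LogicalReads, RowsInserted
from master..monOpenObjectActivity
where IndexID = 0
order by LogicalReads desc
go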

At this point, the potential table has been identified. The next

step is to determine the appropriate partitioning strategy to utilize.

Before you can determine the appropriate partition strategy to

use, you have to identify the business goal that you are trying to

address. The goal will lead you to the partition strategy that is most

likely to address the issue(s). For example, if the goal is to reduce

contention on the end of the table and allow for parallel worker processes to be used during large table scans, then the simple choice is

the round-robin partition. By identifying and understanding the goal

that you are trying to accomplish, the choice of which strategy to use

should be easily recognized. Review the discussion on each strategy

presented earlier to determine which strategy best meets the business

goal that you have identified. Keep in mind that partitioning strategies cannot be combined (e.g., range and list). You can only define

the table as having a specific partitioning strategy.

At this point, the potential table has been identified and the partition strategy has been selected. If the partition strategy selected is round-robin, you are ready to set up the table for partitioning. If the partition strategy was not round-robin, then the next step is to determine the appropriate partition key to utilize.

If you have selected range, hash, or list partitioning strategies, the

proper partition key has to be identified. The same queries and stored

procedures that helped identify potential candidate tables can be used

in defining the column or columns that will offer the appropriate partition key. The partition key should be composed of columns that are

consistently found in the where clause. One goal of the partition key

is to promote partition elimination during query resolution. A good

partition key can support this goal if the database administrator takes

the time to analyze the range of values in the columns being

considered.

Let’s examine the case where gender might be considered as a

part of a partition key. For most organizations, the classifications for

gender fall into two categories: male and female. In this case, gender

might not be a viable candidate column for a partition key as it would

only eliminate 50% of the table. However, for the FBI, there are 11

categories for gender. Potentially, in the worst case, 50% of the data

would be eliminated from the data to be processed. However, if 2%


of the population is in the category “male becoming female” and the

FBI is searching for felons in that category, 98% of the data has been

eliminated. The partitions would have to be sized accordingly to

account for the variance in the number of data rows per category.

This example shows how important knowledge of the data can be to

resolving queries and providing for partition elimination.

When considering columns as potential partition key components, keep in mind that a column may participate in different keys

for different partition strategies. In some cases, a column could be

used for one or more partition strategies. In the case of the FBI data,

gender could be used to partition a table using either the range or list

partitioning strategy.

The syntax for creating an empty partitioned table is:

create table table-name

(column-name column-attributes)

[{partition by range (column_name[, column_name]...)

([partition_name] values <= ({constant | MAX}

[, {constant | MAX}] ...) [on segment_name]

[, [partition_name] values <= ({constant | MAX }

[, {constant | MAX}] ...) [on segment_name]]...)

| partition by hash (column_name[, column_name]...)

{(partition_name [on segment_name]

[, partition_name [on segment_name]]...)

| number_of_partitions

[on (segment_name[, segment_name] ...)]}

| partition by list (column_name)

([partition_name] values (constant[, constant] ...)

[on segment_name]

[, [partition_name] values (constant[, constant] ...)

[on segment_name]] ...)

| partition by roundrobin

{(partition_name [on segment_name]

[, partition_name [on segment_name]]...)

| number_of_partitions [on (segment_name

[, segment_name]...)]}]

The syntax for creating a partitioned table from an existing table

using the select into option is:

select [all | distinct] column-list

into [[database.]owner.]table-name


...

[{partition by range (column_name[, column_name]...)

([partition_name] values <= ({constant | MAX}

[, {constant | MAX}] ...) [on segment_name]

[, [partition_name] values <= ({constant | MAX}

[, {constant | MAX}] ...) [on segment_name]]...)

| partition by hash (column_name[, column_name]...)

{(partition_name [on segment_name]

[, partition_name [on segment_name]]...)

|number_of_partitions

[on (segment_name[, segment_name] ...)] }

| partition by list (column_name)

([partition_name] values (constant[, constant] ...)

[on segment_name]

[, [partition_name] values (constant[, constant] ...)

[on segment_name]] ...)

| partition by roundrobin

{(partition_name [on segment_name]

[, partition_name [on segment_name]]...)

| number_of_partitions [on (segment_name

[, segment_name]...)]}]

from ...
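A minimal sketch of the select into form, building a range-partitioned copy of the earlier sales_history table (the target table, partition names, and boundaries are hypothetical):

select *
into sales_history_2005
partition by range (sale_date)
(h1 values <= ('Jun 30 2005 11:59:59PM'),
h2 values <= ('Dec 31 2005 11:59:59PM'))
from sales_history
go

As with any select into, the target database needs the select into/bulkcopy/pllsort option enabled.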

Index Partitioning

New to ASE 15 is index partitioning. Similar to data partitioning, the

goal of index partitioning is to allow for segregation of index chains

based on the partitioning strategy and/or the partition key. There are

two types of partitioned indexes: local and global. The difference

between them is local indexes are local to the partition whereas

global indexes span the partitions. Global indexes are independent of

the partition definitions. Local indexes are linked to a specific data

partition.

Partitioned indexes inherit the table’s partition key’s partitioning

strategy. If a primary key constraint is defined on a table, the primary

key is partitioned according to the table’s partition, thereby becoming

a local index. However, there is a problem if the primary key column is not the leading column of the partition key: the local index will


not enforce uniqueness. Consequently, if the table is partitioned on a

non-primary key column, the primary key is effectively useless as a

primary key constraint unless it is implemented instead as a unique

global index. The reason for this is evident when you consider the

following scenario:

1. A telephone company’s customer table contains several million

rows where the primary key is customer_id.

2. The table is list partitioned by region (i.e., NE, SE, GL, SC, NW,

SW).

3. Insert a row with customer_id=12345 and region='NE'.

4. Insert a row with customer_id=12345 and region='SE'.

What happens?

The problem is that for performance reasons, ASE will only traverse

the local index b-tree associated with the partition to which the row

belongs. Consequently, using a primary key constraint would allow

duplicate rows in the above example. A warning will be issued at the

time the partitioned table is created with the primary key constraint,

so the table owner is considered warned.

The alternative would be to have ASE search every index tree,

which would actually be slower than a global index — orders of

magnitude slower as the number of partitions gets above five or so.

Think about it. If I have a global index that has seven levels and I

partition a table ten ways, each local index would have only three

levels. But to search each of the ten index trees to enforce uniqueness, I would need to do 10 x 3 = 30 index scans, whereas with my

original global index I would only need to perform seven index

scans. This also impacts foreign keys. The point of this whole discussion is that partitioning on a non-key column is doable as long as the

primary key, unique, and all foreign key indexes are implemented

using global indexes.
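A minimal sketch for the telephone company scenario above (the table and column names follow that scenario; in ASE 15 a nonclustered index on a partitioned table is global unless the local index clause is specified):

create unique nonclustered index customer_pk
on customer (customer_id)
go

Because the index is global, ASE traverses a single b-tree to enforce the uniqueness of customer_id across all of the list partitions.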

Local Index

As noted earlier in the chapter, a local index can be either clustered or

nonclustered. A local clustered index must be built on a table that is

equi-partitioned. What does this mean? For a local clustered index to

be effective, the data needs to be equally distributed across partitions.

This type of index is best utilized for large tables that need the data


presorted (via the clustered index) and need to allow for parallel

query processing.

Since a local index is associated with a particular partition, the

index typically provides faster access to the data pages. This is especially true for nonclustered local prefixed indexes. This is a result of

partition elimination by the optimizer based on the requirement that,

at a minimum, the partition key must contain as its first column the

corresponding column of the partitioning index. For example, if a

table is partitioned on month, the prefixed index should have month

as its first component. It can have additional columns, but the first

column of the index needs to be the same as the first column of the

partition key.

When creating a non-partitioned clustered index on a partitioned

table, the default index type is a local index. The following examples

show the various indexes on partitioned tables.

Clustered Prefixed Index on Range Partitioned Table

create table GP_TableRange

( ColumnA int,

ColumnB char(5)

)

partition by range (ColumnA)

(prtn1 values <= (1000),

prtn2 values <= (2000),

prtn3 values <= (3000),

prtn4 values <= (4000))

go

create clustered index GPIndex on dbo.GP_TableRange (ColumnA)

go

sp_help GP_TableRange

go

***********************************************************

Clustered Non-Partitioned Prefixed Index

on Range Partitioned Table

***********************************************************

Name Owner Object_type Create_date

------------- ----- ----------- --------------------------

GP_TableRange dbo user table May 8 2005 6:30PM

(1 row affected)


Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0

Object has the following indexes

index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnA clustered 0 0

0 May 8 2005 6:30PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

----------------- -------------

GPIndex_724194599 default

GPIndex_740194656 default

GPIndex_756194713 default

GPIndex_772194770 default

(4 rows affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------- ---------- -------------- ----------- --------------

GP_TableRange base table range 4 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

prtn1 724194599 1 default May 8 2005 6:30PM

prtn2 740194656 1 default May 8 2005 6:30PM

prtn3 756194713 1 default May 8 2005 6:30PM

prtn4 772194770 1 default May 8 2005 6:30PM


Partition_Conditions

--------------------

VALUES <= (1000)

VALUES <= (2000)

VALUES <= (3000)

VALUES <= (4000)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Clustered Non-Prefixed Index on Range Partitioned Table

create table GP_TableRange

( ColumnA int,

ColumnB char(5)

)

partition by range (ColumnA)

(prtn1 values <= (1000),

prtn2 values <= (2000),

prtn3 values <= (3000),

prtn4 values <= (4000))

go

create clustered index GPIndex on dbo.GP_TableRange (ColumnB)

go

sp_help GP_TableRange

go


*****************************************************

Clustered Non-Partitioned Non-Prefixed Index

on Range Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------- ----- ----------- --------------------------

GP_TableRange dbo user table May 8 2005 6:30PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0

Object has the following indexes

index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnB clustered 0 0

0 May 8 2005 6:30PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

----------------- -------------

GPIndex_916195283 default

GPIndex_932195340 default

GPIndex_948195397 default

GPIndex_964195454 default

(4 rows affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------- ---------- -------------- ----------- --------------

GP_TableRange base table range 4 ColumnA

(1 row affected)


partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

prtn1 916195283 1 default May 8 2005 6:30PM

prtn2 932195340 1 default May 8 2005 6:30PM

prtn3 948195397 1 default May 8 2005 6:30PM

prtn4 964195454 1 default May 8 2005 6:30PM

Partition_Conditions

--------------------

VALUES <= (1000)

VALUES <= (2000)

VALUES <= (3000)

VALUES <= (4000)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Clustered Prefixed Index on List Partitioned Table

create table GP_TableList

( ColumnA int,

ColumnB char(5)

)

partition by list (ColumnA)

(prtn1 values (1000),

prtn2 values (2000),

prtn3 values (3000),

prtn4 values (4000))

go


create clustered index GPIndex on dbo.GP_TableList (ColumnA)

go

sp_help GP_TableList

go

*****************************************************

Clustered Non-Partitioned Prefixed Index

on List Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------ ----- ----------- --------------------------

GP_TableList dbo user table May 8 2005 6:30PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0

Object has the following indexes

index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnA clustered 0 0

0 May 8 2005 6:30PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_1108195967 default

GPIndex_1124196024 default

GPIndex_1140196081 default

GPIndex_1156196138 default

(4 rows affected)

No defined keys for this object.


name type partition_type partitions partition_keys

------------ ---------- -------------- ----------- --------------

GP_TableList base table list 4 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

prtn1 1108195967 1 default May 8 2005 6:30PM

prtn2 1124196024 1 default May 8 2005 6:30PM

prtn3 1140196081 1 default May 8 2005 6:30PM

prtn4 1156196138 1 default May 8 2005 6:30PM

Partition_Conditions

--------------------

VALUES (1000)

VALUES (2000)

VALUES (3000)

VALUES (4000)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Clustered Non-Prefixed Index on List Partitioned Table

create table GP_TableList

( ColumnA int,

ColumnB char(5)

)


partition by list (ColumnA)

(prtn1 values (1000),

prtn2 values (2000),

prtn3 values (3000),

prtn4 values (4000))

go

create clustered index GPIndex on dbo.GP_TableList (ColumnB)

go

sp_help GP_TableList

go

*****************************************************

Clustered Non-Partitioned Non-Prefixed Index

on List Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------ ----- ----------- --------------------------

GP_TableList dbo user table May 8 2005 6:30PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0

Object has the following indexes

index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnB clustered 0 0

0 May 8 2005 6:30PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_1300196651 default


GPIndex_1316196708 default

GPIndex_1332196765 default

GPIndex_1348196822 default

(4 rows affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------ ---------- -------------- ----------- --------------

GP_TableList base table list 4 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

prtn1 1300196651 1 default May 8 2005 6:30PM

prtn2 1316196708 1 default May 8 2005 6:30PM

prtn3 1332196765 1 default May 8 2005 6:30PM

prtn4 1348196822 1 default May 8 2005 6:30PM

Partition_Conditions

--------------------

VALUES (1000)

VALUES (2000)

VALUES (3000)

VALUES (4000)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)


Clustered Prefixed Index on Round-robin Partitioned Table

create table GP_TableRR

( ColumnA int,

ColumnB char(5)

)

partition by roundrobin 3 on (seg1, seg2, seg3)

go

create clustered index GPIndex on dbo.GP_TableRR (ColumnA)

go

sp_help GP_TableRR

go

*****************************************************

Clustered Non-Partitioned Prefixed Index

on Round-robin Partitioned Table

*****************************************************

Warning: Clustered index 'GPIndex' has been created on the empty partitioned table

'GP_TableRR'. All insertions will go to the first partition. To distribute the data

to all the partitions, re-create the clustered index after loading the data.

Name Owner Object_type Create_date

---------- ----- ----------- --------------------------

GP_TableRR dbo user table May 8 2005 6:31PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0

Object has the following indexes


index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnA clustered 0 0

0 May 8 2005 6:31PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_1460197221 default

GPIndex_1476197278 default

GPIndex_1492197335 default

(3 rows affected)

No defined keys for this object.

name type partition_type partitions partition_keys

---------- ---------- -------------- ----------- --------------

GP_TableRR base table roundrobin 3 NULL

(1 row affected)

partition_name partition_id pages segment create_date

--------------------- ------------ ----------- ------- --------------------------

GP_TableRR_1460197221 1460197221 1 default May 8 2005 6:31PM

GP_TableRR_1476197278 1476197278 1 default May 8 2005 6:31PM

GP_TableRR_1492197335 1492197335 1 default May 8 2005 6:31PM

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.


exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Clustered Non-Prefixed Index on Round-robin Partitioned Table

create table GP_TableRR

( ColumnA int,

ColumnB char(5)

)

partition by roundrobin 3 on (seg1, seg2, seg3)

go

create clustered index GPIndex on dbo.GP_TableRR (ColumnB)

go

sp_help GP_TableRR

go

*****************************************************

Clustered Non-Partitioned Non-Prefixed Index

on Round-robin Partitioned Table

*****************************************************

Warning: Clustered index 'GPIndex' has been created on the empty partitioned table

'GP_TableRR'. All insertions will go to the first partition. To distribute the data

to all the partitions, re-create the clustered index after loading the data.

Name Owner Object_type Create_date

---------- ----- ----------- --------------------------

GP_TableRR dbo user table May 8 2005 6:31PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0


Object has the following indexes

index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnB clustered 0 0

0 May 8 2005 6:31PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_1604197734 default

GPIndex_1620197791 default

GPIndex_1636197848 default

(3 rows affected)

No defined keys for this object.

name type partition_type partitions partition_keys

---------- ---------- -------------- ----------- --------------

GP_TableRR base table roundrobin 3 NULL

(1 row affected)

partition_name partition_id pages segment create_date

--------------------- ------------ ----------- ------- --------------------------

GP_TableRR_1604197734 1604197734 1 default May 8 2005 6:31PM

GP_TableRR_1620197791 1620197791 1 default May 8 2005 6:31PM

GP_TableRR_1636197848 1636197848 1 default May 8 2005 6:31PM

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.


exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Clustered Non-Prefixed Index on Hash Partitioned Table

create table GP_TableHash

( ColumnA int,

ColumnB char(5)

)

partition by hash (ColumnA) 3 on (seg1, seg2, seg3)

go

create clustered index GPIndex on dbo.GP_TableHash (ColumnB)

go

sp_help GP_TableHash

go

*****************************************************

Clustered Non-Partitioned Non-Prefixed Index

on Hash Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------ ----- ----------- --------------------------

GP_TableHash dbo user table May 8 2005 6:31PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0

Object has the following indexes


index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnB clustered 0 0

0 May 8 2005 6:31PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_1924198874 default

GPIndex_1940198931 default

GPIndex_1956198988 default

(3 rows affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------ ---------- -------------- ----------- --------------

GP_TableHash base table hash 3 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

----------------------- ------------ ----------- ------- --------------------------

GP_TableHash_1924198874 1924198874 1 default May 8 2005 6:31PM

GP_TableHash_1940198931 1940198931 1 default May 8 2005 6:31PM

GP_TableHash_1956198988 1956198988 1 default May 8 2005 6:31PM

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.


exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Clustered Prefixed Index on Hash Partitioned Table

create table GP_TableHash

( ColumnA int,

ColumnB char(5)

)

partition by hash (ColumnA) 3 on (seg1, seg2, seg3)

go

create clustered index GPIndex on dbo.GP_TableHash (ColumnA)

go

sp_help GP_TableHash

go

*****************************************************

Clustered Non-Partitioned Prefixed Index

on Hash Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------ ----- ----------- --------------------------

GP_TableHash dbo user table May 8 2005 6:31PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name Access_Rule_name

Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ --------- ----------------

---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL NULL

NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL NULL

NULL 0

Object has the following indexes


index_name index_keys index_description index_max_rows_per_page index_fillfactor

index_reservepagegap index_created

index_local

---------- ---------- ----------------- ----------------------- ----------------

-------------------- --------------------------

-----------

GPIndex ColumnA clustered 0 0

0 May 8 2005 6:31PM

Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_1764198304 default

GPIndex_1780198361 default

GPIndex_1796198418 default

(3 rows affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------ ---------- -------------- ----------- --------------

GP_TableHash base table hash 3 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

----------------------- ------------ ----------- ------- --------------------------

GP_TableHash_1764198304 1764198304 1 default May 8 2005 6:31PM

GP_TableHash_1780198361 1780198361 1 default May 8 2005 6:31PM

GP_TableHash_1796198418 1796198418 1 default May 8 2005 6:31PM

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.


exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

You should have noticed a common occurrence in each of the examples: although the index was created as clustered and no index partitioning was specified, in every case the index was created as a local index. In most of the examples, the clustered index was also placed on segments in the same fashion as the table's partitions. You should also have noted that for the round-robin partitioned table, a warning message was issued suggesting that the data be loaded into the table prior to creating the clustered index. This warning is issued because, if the index exists before any data is loaded, all of the data will go into the first data partition. You may be asking yourself the following questions: Why does a warning get issued for this? Why will the data go to other partitions if the index is created after the data is loaded? Shouldn't the data move when the index is created? Since the indexes are defined as local indexes, the data and index share the same partition. In the case of round-robin partitioning, when the clustered index is created on an existing partitioned table, the data within each partition is sorted, not the entire table as a whole. This behavior is consistent with data partitioning in previous versions of ASE.
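To avoid the warning, load the table before building the clustered index. The following is a minimal sketch of that order of operations; the table, segment, and staging-table names (RR_Sales, RR_Sales_stage) are illustrative, not drawn from the examples above:

create table RR_Sales
( OrderID int,
  Amount float
)
partition by roundrobin 3 on (seg1, seg2, seg3)
go

-- Load the data first; an insert...select from a staging table stands
-- in here for whatever load method is actually used (bcp in, etc.)
insert into RR_Sales select OrderID, Amount from RR_Sales_stage
go

-- Creating the clustered index afterward sorts each partition's rows
-- in place, so the data stays distributed across all three partitions
create clustered index RR_SalesIdx on RR_Sales (OrderID)
go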

We have seen that the default behavior when creating a clustered index is to type the index as a local index and manage the index pages with the data pages. However, it is often more desirable to control the index pages in their own partitions. For user-defined partitioning of an index, ASE supports this functionality on range, list, and hash partitioned tables.

The syntax for creating a partitioned local index is:

create index ...

local index partition_name on segment_name

Example:

create clustered index LocalPartitionedIndex

on TableName (ColumnA)


local index

partition1 on seg1,

partition2 on seg2,

partition3 on seg3

go

sp_helpindex TableName

go

Object has the following indexes

index_name

index_keys index_description

index_max_rows_per_page index_fillfactor index_reservepagegap

index_created index_local

------------------------------------------------------------------------

------------ --------------------------------

------------------------------------ -----------------------

---------------- -------------------- --------------------------

--------------------------------------------

LocalPartitionedIndex

ColumnA clustered

0 0 0 Oct 28 2005 10:56AM Local

Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------------------------------- ----------------------------

LocalPartitionedIndex_860527068 default

partition1 seg1

partition2 seg2

partition3 seg3

(4 rows affected)

(return status = 0)

Global Index

A global index can be either clustered or nonclustered, and a global index can also be prefixed. Prefixed global indexes are available for all data partitioning strategies. Non-prefixed global indexes are not supported in ASE 15, as the industry has found the concept to be of little use in real applications.


For nonclustered non-partitioned global indexes, the table can be partitioned with more than one partition. Since a nonclustered non-partitioned global index spans all of the table's data pages, such indexes provide backward compatibility with previous versions of ASE. Examples of each type of data partitioning with global nonclustered non-partitioned indexes are shown below. Nonclustered partitioned global indexes are available only for round-robin partitioned tables.

Global Nonclustered Prefixed Index on Range Partitioned Table

create table GP_TableRange

( ColumnA int,

ColumnB char(5)

)

partition by range (ColumnA)

(prtn1 values <= (1000),

prtn2 values <= (2000),

prtn3 values <= (3000),

prtn4 values <= (4000))

go

create nonclustered index GPIndex on dbo.GP_TableRange (ColumnA)

go

sp_help GP_TableRange

go

*****************************************************

Global Nonclustered Non-Partitioned Prefixed Index

on Range Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------- ----- ----------- --------------------------

GP_TableRange dbo user table May 8 2005 1:47PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name

Access_Rule_name Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ ---------

---------------- ---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL

NULL NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL

NULL NULL 0


Object has the following indexes

index_name index_keys index_description index_max_rows_per_page

index_fillfactor index_reservepagegap

index_created index_local

---------- ---------- ----------------- -----------------------

---------------- --------------------

-------------------------- ------------

GPIndex ColumnA nonclustered 0

0 0

May 8 2005 1:47PM Global Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_1934626904 default

(1 row affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------- ---------- -------------- ----------- --------------

GP_TableRange base table range 4 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

prtn1 1854626619 1 default May 8 2005 1:47PM

prtn2 1870626676 1 default May 8 2005 1:47PM

prtn3 1886626733 1 default May 8 2005 1:47PM

prtn4 1902626790 1 default May 8 2005 1:47PM

Partition_Conditions

--------------------

VALUES <= (1000)

VALUES <= (2000)

VALUES <= (3000)

VALUES <= (4000)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.


exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Global Nonclustered Prefixed Index on List Partitioned Table

create table GP_TableList

( ColumnA int,

ColumnB char(5)

)

partition by list (ColumnA)

(prtn1 values (1000),

prtn2 values (2000),

prtn3 values (3000),

prtn4 values (4000))

go


create nonclustered index GPIndex on dbo.GP_TableList (ColumnA)

go

sp_help GP_TableList

go

*****************************************************

Global Nonclustered Non-Partitioned Prefixed Index

on List Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------ ----- ----------- --------------------------

GP_TableList dbo user table May 8 2005 1:47PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name

Access_Rule_name Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ ---------

---------------- ---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL

NULL NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL

NULL NULL 0


Object has the following indexes

index_name index_keys index_description index_max_rows_per_page

index_fillfactor index_reservepagegap

index_created index_local

---------- ---------- ----------------- -----------------------

---------------- --------------------

-------------------------- ------------

GPIndex ColumnA nonclustered 0

0 0

May 8 2005 1:47PM Global Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_2142627645 default

(1 row affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------ ---------- -------------- ----------- --------------

GP_TableList base table list 4 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

prtn1 2062627360 1 default May 8 2005 1:47PM

prtn2 2078627417 1 default May 8 2005 1:47PM

prtn3 2094627474 1 default May 8 2005 1:47PM

prtn4 2110627531 1 default May 8 2005 1:47PM

Partition_Conditions

--------------------

VALUES (1000)

VALUES (2000)

VALUES (3000)

VALUES (4000)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.


The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Global Nonclustered Prefixed Index on Round-robin Partitioned Table

create table GP_TableRR

( ColumnA int,

ColumnB char(5)

)

partition by roundrobin 3 on (seg1, seg2, seg3)

go

create nonclustered index GPIndex on dbo.GP_TableRR (ColumnA)

go

sp_help GP_TableRR

go

*****************************************************

Global Nonclustered Non-Partitioned Prefixed Index

on Round-robin Partitioned Table

*****************************************************

Name Owner Object_type Create_date

---------- ----- ----------- --------------------------

GP_TableRR dbo user table May 8 2005 1:47PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name

Access_Rule_name Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ ---------

---------------- ---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL

NULL NULL 0


ColumnB char 5 NULL NULL 0 NULL NULL

NULL NULL 0

Object has the following indexes

index_name index_keys index_description index_max_rows_per_page

index_fillfactor index_reservepagegap

index_created index_local

---------- ---------- ----------------- -----------------------

---------------- --------------------

-------------------------- ------------

GPIndex ColumnA nonclustered 0

0 0

May 8 2005 1:47PM Global Index

(1 row affected)

index_ptn_name index_ptn_seg

----------------- -------------

GPIndex_139144510 default

(1 row affected)

No defined keys for this object.

name type partition_type partitions partition_keys

---------- ---------- -------------- ----------- --------------

GP_TableRR base table roundrobin 3 NULL

(1 row affected)

partition_name partition_id pages segment create_date

-------------------- ------------ ----------- ------- --------------------------

GP_TableRR_91144339 91144339 1 seg1 May 8 2005 1:47PM

GP_TableRR_107144396 107144396 1 seg2 May 8 2005 1:47PM

GP_TableRR_123144453 123144453 1 seg3 May 8 2005 1:47PM

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.


exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0

(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Global Nonclustered Prefixed Index on Hash Partitioned Table

create table GP_TableHash

( ColumnA int,

ColumnB char(5)

)

partition by hash (ColumnA) 3 on (seg1, seg2, seg3)

go

create nonclustered index GPIndex on dbo.GP_TableHash (ColumnA)

go

sp_help GP_TableHash

go

*****************************************************

Global Nonclustered Non-Partitioned Prefixed Index

on Hash Partitioned Table

*****************************************************

Name Owner Object_type Create_date

------------ ----- ----------- --------------------------

GP_TableHash dbo user table May 8 2005 1:47PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name

Access_Rule_name Computed_Column_object Identity

----------- ---- ----------- ---- ----- ----- ------------ ---------

---------------- ---------------------- --------

ColumnA int 4 NULL NULL 0 NULL NULL

NULL NULL 0

ColumnB char 5 NULL NULL 0 NULL NULL

NULL NULL 0

Object has the following indexes


index_name index_keys index_description index_max_rows_per_page

index_fillfactor index_reservepagegap

index_created index_local

---------- ---------- ----------------- -----------------------

---------------- --------------------

-------------------------- ------------

GPIndex ColumnA nonclustered 0

0 0

May 8 2005 1:47PM Global Index

(1 row affected)

index_ptn_name index_ptn_seg

----------------- -------------

GPIndex_315145137 default

(1 row affected)

No defined keys for this object.

name type partition_type partitions partition_keys

------------ ---------- -------------- ----------- --------------

GP_TableHash base table hash 3 ColumnA

(1 row affected)

partition_name partition_id pages segment create_date

---------------------- ------------ ---------- ------- -------------------------

GP_TableHash_251144909 251144909 1 seg1 May 8 2005 1:47PM

GP_TableHash_267144966 267144966 1 seg2 May 8 2005 1:47PM

GP_TableHash_283145023 283145023 1 seg3 May 8 2005 1:47PM

Partition_Conditions

--------------------

NULL

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables with allpages

lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

1 0 0 0 0


(1 row affected)

concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

Query Processor and Partition Support

The revised query processor (QP) is partition aware: it makes use of the existence of partitions on tables and indexes, and it continues to make use of parallelism where possible. As discussed earlier, it is important to have disk subsystem support for parallel access to gain the best advantage from partitions.

The query processor takes advantage of the partitions with respect to:

• Directed joins with partitioned data — Partition-wise joins for tables that are partitioned similarly result in significant performance gains. The illustration below shows how a directed join at the partition level works.

[Figure 3-5: Directed joins with partitioned data. The customer and items_sold tables are both range partitioned on cust_id: partitionX (values cust_id <= 5000) on seg1, partitionY (values cust_id <= 15000) on the default segment for one table and on seg2 for the other, and partitionZ (values cust_id <= 25000) on seg3, with each segment spread over multiple disk chunks. For the query select i.cust_id, item_id, <more columns> from customer c, items_sold i where c.cust_id = i.cust_id and c.cust_id <= 5000, the two partitionX partitions are joined directly and the remaining partitions are eliminated from the join.]
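As a sketch of the setup behind Figure 3-5 (column lists simplified; the partition names and boundary values follow the figure, while the column definitions are assumptions for illustration):

create table customer
( cust_id int,
  cust_name varchar(30)
)
partition by range (cust_id)
(partitionX values <= (5000) on seg1,
 partitionY values <= (15000),
 partitionZ values <= (25000) on seg3)
go

create table items_sold
( cust_id int,
  item_id int
)
partition by range (cust_id)
(partitionX values <= (5000) on seg1,
 partitionY values <= (15000) on seg2,
 partitionZ values <= (25000) on seg3)
go

-- Both tables are partitioned on cust_id with the same boundaries, so
-- only the two partitionX partitions participate in this join; the
-- remaining partitions can be eliminated
select i.cust_id, i.item_id
from customer c, items_sold i
where c.cust_id = i.cust_id
and c.cust_id <= 5000
go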


• Partition elimination — This is the major gain of the range and list partitioning schemes. If you have, for example, 366 partitions, one for each day of the year, and the query qualifies for just one day, then the QP will eliminate the other 365 partitions and the query will be orders of magnitude faster. The following illustration shows how this works.

If you want to select all the trades that occurred from May to November, the query processing engine will automatically eliminate the remaining partitions and will consider only the Month5 through Month11 partitions.

[Figure 3-6: The stock_trades table partitioned per month (Month1 through Month12), with a datetime column as the partition key; the partitions outside the qualifying range are eliminated.]
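A compact sketch of the scheme in Figure 3-6, with three monthly partitions rather than twelve; the boundary dates are illustrative assumptions:

create table stock_trades
( trade_id int,
  trade_date datetime,
  symbol char(6)
)
partition by range (trade_date)
(Month1 values <= ('Jan 31 2005 11:59PM'),
 Month2 values <= ('Feb 28 2005 11:59PM'),
 Month3 values <= ('Mar 31 2005 11:59PM'))
go

-- The predicate falls entirely within Month2, so the optimizer can
-- eliminate the other partitions from the access path
select trade_id, symbol
from stock_trades
where trade_date between 'Feb 1 2005' and 'Feb 28 2005'
go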

• Local indexes on partitions — Increased concurrency through multiple index access points. The number of index levels per partition will likely be smaller than with one large index.

[Figure 3-7: Queries X, Y, and Z against an unpartitioned table and unpartitioned index, contrasted with a partitioned index where each query reaches its own partition (PartitionX, PartitionY, PartitionZ) through a separate index access point.]

• Partition-specific statistics — Data distribution statistics are now tracked at the partition level, which helps the QP select more appropriate plans. Similarly, statistics can be updated and maintained at the partition level.
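A minimal sketch of per-partition statistics maintenance, assuming the ASE 15 partition clause of update statistics and reusing the stock_trades sketch above:

-- Refresh data distribution statistics for one data partition only
update statistics stock_trades partition Month2
go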

ASE 15 Optimizer

The newly revised Sybase ASE 15 optimizer is cognizant of the new partitioning schemes. The optimizer can pick only those partitions that satisfy the query, eliminating all other partitions from the access paths. This "partition elimination" is expected to improve performance not just for OLTP queries but also for DSS queries. The optimizer can now consider one or more individual partitions rather than the entire table or all partitions (as in pre-15 partitioning). Note that partition elimination does not apply to the default, unlicensed partitioning scheme called round-robin partitioning, which is equivalent to partitioning in pre-15 releases. Sybase realized that in order to improve data access performance for very large tables, the data has to be partitionable in ways other than just defining "n" partitions on the table. "n" partitions are good for inserting large amounts of data when all rows are added to the end of the table; however, with the proliferation of decision support systems and management's need to slice and dice data, Sybase needed to extend partitioning with meaning, or semantics, into new areas.

Partition Maintenance

This section highlights some of the maintenance functions and stored procedures available to help manage partitioned tables and indexes. For more details about the new stored procedures, see Chapter 2, "System Maintenance Improvements."

Altering Data Partitions

How many times have you been told that once the table is created,

the application will not require modifications to be made to the table?

As all DBAs know, it is unusual for a table to not change in some

fashion during the life of the application. The same is true for parti-

tioned tables. Altering a partitioned table will occur for some of the

following reasons:


• The size of a table has grown to a point where performance is an issue.

• The data in the table has grown and additional partitions need to be defined.

• The initial partitioning strategy selected for the table was incorrect.

• The table no longer needs to be partitioned.

• Additional ranges or list values have been added to the table.

• The data in a partition has aged past its usefulness and the partition needs to be dropped.

• The partition key needs a modification. This could involve dropping a column, changing a datatype, adding a column, etc.

The normal sequence of events for altering a table should be:

• Drop indexes that exist on the table.

• Partition or repartition the table.

• Recreate the indexes.

The alter table command can be used to perform the maintenance on

the table. The options available for the command allow you to:

unpartition, change the number of partitions, add and drop partitions,

and alter the key column.

Unpartition a Table

alter table [database-name.[owner-name].]table-name unpartition

Warning: With the current implementation of the unpartition command,

only round-robin partitioned tables with no indexes can be unpartitioned.

In addition to using the alter statement, a table can be unpartitioned by using bcp to export the data, dropping the table, recreating it without partitions, using bcp to reload the data, and running sp_recompile on the table.

You can also unpartition a table by using the select into command to copy the data into an unpartitioned table, applying any indexes from the partitioned table to the unpartitioned version, dropping the partitioned table, and running sp_recompile on the new table.
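A minimal sketch of the select into approach, reusing GP_TableRR from the earlier examples; the _copy suffix and the index shown are illustrative:

-- The 'select into/bulkcopy/pllsort' database option must be enabled
select * into GP_TableRR_copy from GP_TableRR
go
-- Reapply any indexes that existed on the partitioned table
create nonclustered index GPIndex on GP_TableRR_copy (ColumnA)
go
drop table GP_TableRR
go
exec sp_rename 'GP_TableRR_copy', 'GP_TableRR'
go
exec sp_recompile GP_TableRR
go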


Change the Number of Partitions

alter table [database-name.[owner-name].]table-name

partition number-of-partitions

This syntax is supported only for round-robin partitions, and even here you need to drop the clustered indexes first, unpartition the table using alter table, and repartition using this syntax, as sketched below. For the other, semantic partition types, changing the number of partitions is equivalent to adding new partitions; dropping partitions is not supported at this time.
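A sketch of that sequence for a round-robin table, reusing GP_TableRR and assuming it carries a clustered index named GPIndex:

-- 1. Drop the clustered index
drop index GP_TableRR.GPIndex
go
-- 2. Unpartition the table
alter table GP_TableRR unpartition
go
-- 3. Repartition with the new number of partitions
alter table GP_TableRR partition 6
go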

Add a Partition to a Table

alter table [database-name.[owner-name].]table-name
add partition
([partition_name] values <= ({constant | MAX}
    [, {constant | MAX}]...)
    [on segment_name]
 [, [partition_name] values <= ({constant | MAX}
    [, {constant | MAX}]...)
    [on segment_name]]...)

Only range and list partitioned tables can have partitions added to them. In order to add a partition to a hash or round-robin partitioned table, it is necessary to recreate the table with the additional partition(s) and reload the data using select into or bcp, as sketched below.
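For example, a sketch of growing GP_TableHash from three to four hash partitions by rebuilding it; the _new suffix is illustrative, and bcp out/in could stand in for the insert...select:

create table GP_TableHash_new
( ColumnA int,
  ColumnB char(5)
)
partition by hash (ColumnA) 4
go
insert into GP_TableHash_new select ColumnA, ColumnB from GP_TableHash
go
drop table GP_TableHash
go
exec sp_rename 'GP_TableHash_new', 'GP_TableHash'
go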

There is a difference between the following two syntax forms for altering a range partition. If you specify the clause partition by range (<partition key>), the statement repartitions the table: instead of adding a new partition, it replaces the existing partitions with the single partition you specify, as the first example below shows. To add new partitions, use add partition without the partition by range (<partition key>) clause.

create table RangeTable2 (sales_region int, sales float)

partition by range(sales_region)

(region1 values <= (100),

region2 values <= (200),

region3 values <= (300))

go

sp_helpartition RangeTable2

go


name type partition_type partitions partition_keys

------------------------------- ------------- -------------- ----------- ------------

RangeTable2 base table range 3 sales_region

(1 row affected)

partition_name partition_id pages segment create_date

------------------------ ------------ -------- ------------------- -------------------

region1 864003078 1 default Sep 30 2005 9:37AM

region2 880003135 1 default Sep 30 2005 9:37AM

region3 896003192 1 default Sep 30 2005 9:37AM

Partition_Conditions

------------------------------------------------------------

VALUES <= (100)

VALUES <= (200)

VALUES <= (300)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- ------------------- ------------------

1 1 1 1.000000 1.000000

(return status = 0)

alter table RangeTable2 partition by range (sales_region)

(region1 values <= (400))

go

(0 rows affected)

sp_helpartition RangeTable2

go

name type partition_type partitions partition_keys

----------------------- ---------------- -------------- ----------- --------------

RangeTable2 base table range 1 sales_region

(1 row affected)

partition_name partition_id pages segment create_date

---------------------------- ------------ ----------- ------------ -------------------

region1 928003306 1 default Sep 30 2005 9:39AM

Partition_Conditions

------------------------------------------------------------

VALUES <= (400)


Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- --------------------- ---------------------------

1 1 1 1.000000 1.000000

(return status = 0)

On the other hand, if you omit the partition by range (<partition key>) clause in the alter statement and use add partition, a new partition is added.

Example:

create table RangeTable2 (sales_region int, sales float)

partition by range(sales_region)

(region1 values <= (100),

region2 values <= (200),

region3 values <= (300))

go

alter table RangeTable2 add partition (region4 values <= (400))

go

sp_helpartition RangeTable2

go

name type partition_type partitions partition_keys

------------------------- ---------------- -------------- ----------- ----------------

RangeTable2 base table range 4 sales_region

(1 row affected)

partition_name partition_id pages segment create_date

---------------------- ------------ ----------- ------------------ -------------------

region1 1056003762 1 default Sep 30 2005 9:41AM

region2 1072003819 1 default Sep 30 2005 9:41AM

region3 1088003876 1 default Sep 30 2005 9:41AM

region4 1120003990 1 default Sep 30 2005 9:42AM

Partition_Conditions

------------------------------------------------------------

VALUES <= (100)

VALUES <= (200)

VALUES <= (300)

VALUES <= (400)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- ------------------------ ------------------------

1 1 1 1.000000 1.000000

(return status = 0)


To alter a table in order to change the partitions, the database option

select into/bulkcopy/pllsort must be set to true. Be sure to make a full

database dump after altering partitions and also turn off the select

into option if you need to.

With regard to list partitions, when adding a new partition, the clause partition by list (<partition key>) is not allowed; an error message will be generated.

Consider the example of the table ListPartition1, which has four

partitions as follows:

Table name      Partition name   Rows per partition
--------------- ---------------- ------------------
ListPartition1  region1          1
ListPartition1  region2          21
ListPartition1  region3          21
ListPartition1  region4          21

If you try to add a new list partition with the following syntax, it will

not succeed. Recall that similar syntax to add a new range partition

would succeed, wiping out all range partitions and giving just one

partition that you added.

1> alter table ListPartition1

2> partition by list (state)

3> (region5 values ('TX'))

4> go

Msg 9573, Level 16, State 1:

Server 'DMGTD02A2_SYB', Line 1:

The server failed to create or update a row in table 'ListPartition1'

because the values of the row's partition-key columns do not fit into

any of the table's partitions.

The only way to add a new list partition is to use the add clause as

follows:

1> alter table ListPartition1

2> add partition

3> (region5 values ('TX'))

4> go

Now the partitions include the new partition that was just added.

Table name      Partition name   Rows per partition
--------------- ---------------- ------------------
ListPartition1  region1          1
ListPartition1  region2          21
ListPartition1  region3          21
ListPartition1  region4          21
ListPartition1  region5          0

Drop Partitions

alter table [database-name.[owner-name].]table-name

drop partition partition-name

Dropping one or more partitions is currently not supported for the new semantic partitions. It is supported for round-robin partitions, where unpartitioning is the only way to drop all partitions. The only way to drop one or more partitions from a table with semantic partitioning is to create a new table with the required partitions and migrate the data using select into or bcp, as sketched below.
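A sketch of that workaround, removing the region1 range from the RangeTable2 example shown earlier; whether the old rows are carried forward is a business decision:

create table RangeTable2_new (sales_region int, sales float)
partition by range (sales_region)
(region2 values <= (200),
 region3 values <= (300))
go
-- Copy only the rows that belong in the remaining ranges
insert into RangeTable2_new
select sales_region, sales from RangeTable2
where sales_region > 100
go
drop table RangeTable2
go
exec sp_rename 'RangeTable2_new', 'RangeTable2'
go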

Dropping partitions is not supported for semantic partitions in the current release partly because of the impact of re-adding partitions and of any data added between the drop and the re-add. Consider the following example:

1. A table is partitioned using a range partition with the ranges of

<= 100, 200, 300, 400, etc., respectively.

2. The DBA drops the first partition (<=100).

3. Users continue to insert data with values of 50, 99, 72, 34, etc. —

all of which are <= 100. These values would get added to the

former second partition (<=200) as it qualifies in evaluation.

4. The DBA attempts to re-add the first partition (<=100) due to

unarchiving historical data or other business driver.

5. Users continue to insert data values <= 100, which would now go

to the first partition again.

The problem, of course, is how to retrieve the rows entered at step 3. By partition rules, the execution engine would, after step 5, look only in the re-added first partition, effectively "masking" the rows added at step 3 that would otherwise have qualified for retrieval. This refers to range partitioning; similar problems exist for hash partitioning, since dropping a partition implies altering the hash function to eliminate that partition, so data that used to go to it may be spread across several partitions. The obvious answer is that during an add partition operation the server should scan the existing partitions to relocate the data. As a result, dropping and re-adding partitions for a hash-partitioned table would have the same impact as repartitioning the table, while list partitions would be relatively unaffected and range partitions would fall somewhere in between.

Modifications to the Partition Key

There are a few restrictions that have to be taken into account when a

modification to the partition key is considered. You cannot drop a

column that is part of the partition key. Datatype modifications have

to be compatible between the old and new datatype. Certain datatype

conversions will cause the data to be rearranged or resorted.

Updating partition keys while other users are querying the table can lead to unexpected results for those clients performing reads, because the updated rows may be moved to a new partition. The risk increases if the table is defined with datarow locking. The problem can be avoided either by not allowing updates against the columns used in the partition key or by using isolation level 3. In a future release of ASE 15, a new table attribute will be provided that will disallow partition key updates.

The example below shows several of these types of changes and

the impact of the change.

create table AlterPartitionKeyColumn

( ColumnA int,

ColumnB char(5),

ColumnC varchar(5),

ColumnD smallint,

ColumnE bit,

ColumnF datetime,

ColumnG numeric(10,2),

ColumnH tinyint,

ColumnI smalldatetime

)

partition by range (ColumnA, ColumnB)

(value1 values <= (100, "aaaa") on seg1,

value2 values <= (200, "hhhh") on seg2,

value3 values <= (300, MAX) on seg3)

go

insert into AlterPartitionKeyColumn

values (10, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (20, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (30, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")


insert into AlterPartitionKeyColumn

values (40, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (50, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (60, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (70, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (80, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (90, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (100, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn

values (110, 'bbbb', 'bbbb', 10,1,"5/5/2005",999.99, 1,"5/5/2005")

go

sp_helpartition AlterPartitionKeyColumn

go

print "******************************"

print "* Attempt to drop ColumnB *"

print "******************************"

go

alter table AlterPartitionKeyColumn drop ColumnB

go

print "*************************************************************"

print "* Attempt to change datatype of ColumnA to numeric (10,2) *"

print "*************************************************************"

go

alter table AlterPartitionKeyColumn modify ColumnA numeric(10,2)

go

print "***************************************************"

print "* Attempt to change datatype of ColumnB to int *"

print "***************************************************"

go

alter table AlterPartitionKeyColumn modify ColumnB int

go

Results:

name type partition_type partitions partition_keys

-------------------- ----------- -------------- ----------- ----------------

AlterPartitionKeyColumn base table range 3 ColumnA, ColumnB

(1 row affected)


partition_name partition_id pages segment create_date

-------------- ------------ ----------- ------- --------------------------

value1 1856722636 1 seg1 May 9 2005 8:20PM

value2 1872722693 1 seg2 May 9 2005 8:20PM

value3 1888722750 1 seg3 May 9 2005 8:20PM

Partition_Conditions

-----------------------

VALUES <= (100, "aaaa")

VALUES <= (200, "hhhh")

VALUES <= (300, MAX)

Avg_pages Max_pages Min_pages Ratio(Max/Avg) Ratio(Min/Avg)

----------- ----------- ----------- -------------------- --------------------

1 1 1 1.000000 1.000000

(return status = 0)

******************************

* Attempt to drop ColumnB *

******************************

Msg 13938, Level 16, State 1:

Server 'DCDR04', Line 1:

ALTER TABLE 'AlterPartitionKeyColumn' failed. You cannot drop column 'ColumnB'

because it is part of the partition key for the table or index

'AlterPartitionKeyColumn'. Drop the index or unpartition the table first.

*************************************************************

* Attempt to change datatype of ColumnA to numeric (10,2) *

*************************************************************

(11 rows affected)

***************************************************

* Attempt to change datatype of ColumnB to int *

***************************************************

Msg 257, Level 16, State 1:

Server 'DCDR04', Line 1:

Implicit conversion from datatype 'VARCHAR' to 'INT' is not allowed. Use the

CONVERT function to run this query.

Msg 11050, Level 16, State 3:

Server 'DCDR04', Line 1:

Adaptive Server cannot process this ALTER TABLE statement due to one or more preceding

errors. If there are no preceding errors, please contact Sybase Technical Support.


Partition Information

Information about the partitioning on a table or index is available through several new stored procedures, system table columns, and system functions. A revised system stored procedure, sp_helpartition, is available to retrieve information about the partitioning on a table. This stored procedure is also called by sp_help when used to gather information about a table, and it is executed when sp_helpindex is issued. The example below shows the various methods that can be used to view partition information. In addition, review the section called "Partition-level Utilities" in Chapter 2 to find various other tools that may be used to find partition information.
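The catalog can also be queried directly. A minimal sketch, assuming the ASE 15 syspartitions columns named here:

-- One row per data or index partition of the table
select name, partitionid, segment, crdate
from syspartitions
where id = object_id('AlterPartitionKeyColumn')
go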

create table AlterPartitionKeyColumn

( ColumnA int,

ColumnB char(5),

ColumnC varchar(5),

ColumnD smallint,

ColumnE bit,

ColumnF datetime,

ColumnG numeric(10,2),

ColumnH tinyint,

ColumnI smalldatetime

)

partition by range (ColumnA, ColumnB)

(value1 values <= (100, "aaaa") on seg1,

value2 values <= (200, "hhhh") on seg2,

value3 values <= (300, MAX) on seg3 )

go

create nonclustered index GPIndex on dbo.AlterPartitionKeyColumn (ColumnA,

ColumnB)

go

insert into AlterPartitionKeyColumn values (10, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (20, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (30, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (40, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (50, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")


insert into AlterPartitionKeyColumn values (60, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (70, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (80, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (90, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (100, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (110, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (120, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (130, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (140, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (150, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (160, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (170, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (180, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (190, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (200, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (210, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (220, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (230, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (240, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (250, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (260, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (270, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (280, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")


insert into AlterPartitionKeyColumn values (290, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

insert into AlterPartitionKeyColumn values (300, 'bbbb', 'bbbb',

10,1,"5/5/2005",999.99, 1,"5/5/2005")

go 10 -- <= Execute 10 times

print "************************************************************"

print "* CMD: sp_help AlterPartitionKeyColumn"

print "************************************************************"

go

sp_help AlterPartitionKeyColumn

go

print "************************************************************"

print "* CMD: sp_helpartition AlterPartitionKeyColumn"

print "************************************************************"

go

sp_helpartition AlterPartitionKeyColumn

go

print "************************************************************"

print "* CMD: sp_helpartition AlterPartitionKeyColumn, null, value1"

print "************************************************************"

go

sp_helpartition AlterPartitionKeyColumn, null, value1

go

print "************************************************************"

print "* CMD: sp_helpartition AlterPartitionKeyColumn, GPIndex"

print "************************************************************"

go

sp_helpartition AlterPartitionKeyColumn, GPIndex

go

print "************************************************************"

print "* CMD: sp_helpindex AlterPartitionKeyColumn, GPIndex"

print "************************************************************"

go

sp_helpindex AlterPartitionKeyColumn, GPIndex

go

Results:

************************************************************

* CMD: sp_help AlterPartitionKeyColumn

************************************************************

Name Owner Object_type Create_date

----------------------- ----- ----------- --------------------------

AlterPartitionKeyColumn dbo user table May 9 2005 10:47PM

(1 row affected)


Column_name Type Length Prec Scale Nulls Default_name

Rule_name Access_Rule_name Computed_Column_object Identity

----------- ------------- ----------- ---- ----- ----- ------------

--------- ---------------- ---------------------- --------

ColumnA int 4 NULL NULL 0 NULL

NULL NULL NULL 0

ColumnB char 5 NULL NULL 0 NULL

NULL NULL NULL 0

ColumnC varchar 5 NULL NULL 0 NULL

NULL NULL NULL 0

ColumnD smallint 2 NULL NULL 0 NULL

NULL NULL NULL 0

ColumnE bit 1 NULL NULL 0 NULL

NULL NULL NULL 0

ColumnF datetime 8 NULL NULL 0 NULL

NULL NULL NULL 0

ColumnG numeric 6 10 2 0 NULL

NULL NULL NULL 0

ColumnH tinyint 1 NULL NULL 0 NULL

NULL NULL NULL 0

ColumnI smalldatetime 4 NULL NULL 0 NULL

NULL NULL NULL 0

Object has the following indexes

index_name index_keys index_description index_max_rows_per_page

index_fillfactor index_reservepagegap

index_created index_local

---------- ----------------- ----------------- -----------------------

---------------- --------------------

-------------------------- -----------

GPIndex ColumnA, ColumnB nonclustered 0

0 0

May 9 2005 10:47PM Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_2109247538 default

(1 row affected)

No defined keys for this object.


name type partition_type partitions

partition_keys

----------------------- ---------- -------------- -----------

----------------

AlterPartitionKeyColumn base table range 3

ColumnA, ColumnB

(1 row affected)

partition_name partition_id pages segment

create_date

-------------- ------------ ----------- -------

--------------------------

value1 2045247310 2 seg1

May 9 2005 10:47PM

value2 2061247367 3 seg2

May 9 2005 10:47PM

value3 2077247424 3 seg3

May 9 2005 10:47PM

Partition_Conditions

-----------------------

VALUES <= (100, "aaaa")

VALUES <= (200, "hhhh")

VALUES <= (300, MAX)

Avg_pages Max_pages Min_pages Ratio(Max/Avg)

Ratio(Min/Avg)

----------- ----------- ----------- --------------------

--------------------

2 3 2 1.500000

1.000000

Lock scheme Allpages

The attribute 'exp_row_size' is not applicable to tables with allpages

lock scheme.

The attribute 'concurrency_opt_threshold' is not applicable to tables

with allpages lock scheme.

exp_row_size reservepagegap fillfactor max_rows_per_page identity_gap

------------ -------------- ---------- ----------------- ------------

0 0 0 0 0

(1 row affected)


concurrency_opt_threshold optimistic_index_lock dealloc_first_txtpg

------------------------- --------------------- -------------------

0 0 0

(return status = 0)

************************************************************

* CMD: sp_helpartition AlterPartitionKeyColumn

************************************************************

name type partition_type partitions

partition_keys

----------------------- ---------- -------------- -----------

----------------

AlterPartitionKeyColumn base table range 3

ColumnA, ColumnB

(1 row affected)

partition_name partition_id pages segment

create_date

-------------- ------------ ----------- -------

--------------------------

value1 2045247310 2 seg1

May 9 2005 10:47PM

value2 2061247367 3 seg2

May 9 2005 10:47PM

value3 2077247424 3 seg3

May 9 2005 10:47PM

Partition_Conditions
-----------------------

VALUES <= (100, "aaaa")

VALUES <= (200, "hhhh")

VALUES <= (300, MAX)

Avg_pages Max_pages Min_pages Ratio(Max/Avg)

Ratio(Min/Avg)

----------- ----------- ----------- --------------------

--------------------

2 3 2 1.500000

1.000000

(return status = 0)

************************************************************

* CMD: sp_helpartition AlterPartitionKeyColumn, null, value1

************************************************************


name type partition_type partitions

partition_keys

----------------------- ---------- -------------- -----------

----------------

AlterPartitionKeyColumn base table range 3

ColumnA, ColumnB

(1 row affected)

partition_name partition_id pages segment

create_date

-------------- ------------ ----------- -------

--------------------------

value1 2045247310 2 seg1

May 9 2005 10:47PM

Partition_Conditions

-----------------------

VALUES <= (100, "aaaa")

(return status = 0)

************************************************************

* CMD: sp_helpartition AlterPartitionKeyColumn, GPIndex

************************************************************

name type partition_type partitions partition_keys

------- ------------ -------------- ----------- --------------

GPIndex global index roundrobin 1 NULL

(1 row affected)

partition_name partition_id pages segment

create_date

------------------ ------------ ----------- -------

--------------------------

GPIndex_2109247538 2109247538 5 default

May 9 2005 10:47PM

Partition_Conditions

--------------------

NULL


Avg_pages Max_pages Min_pages Ratio(Max/Avg)

Ratio(Min/Avg)

----------- ----------- ----------- --------------------

--------------------

5 5 5 1.000000

1.000000

(return status = 0)

************************************************************

* CMD: sp_helpindex AlterPartitionKeyColumn, GPIndex

************************************************************

Object has the following indexes

index_name index_keys index_description index_max_rows_per_page

index_fillfactor index_reservepagegap

index_created index_local

---------- ----------------- ----------------- -----------------------

---------------- --------------------

-------------------------- -----------

GPIndex ColumnA, ColumnB nonclustered 0

0 0

May 9 2005 10:47PM Local Index

(1 row affected)

index_ptn_name index_ptn_seg

------------------ -------------

GPIndex_2109247538 default

(1 row affected)

(return status = 0)

Sybase has also provided new system functions and updated some

existing ones to allow the user to query information about the parti-

tions. These system functions include:

• data_pages — Returns the number of data pages used by the
partition.

Syntax:

data_pages (dbid, object_id [, indid [, ptnid]])

• reserved_pages — Returns the number of reserved pages used by
the partition.

Syntax:

reserved_pages (dbid, object_id [, indid [, ptnid]])


• used_pages — Returns the number of used pages for the partition,
including pages used by internal structures such as OAM pages.

Syntax:

used_pages (dbid, object_id [, indid [, ptnid]])

• row_count — Returns an estimated count of the rows in the
partition.

Syntax:

row_count (dbid, object_id, partition_id)

• partition_id — Returns the partition ID for the specified partition.
To get the partition ID of an index partition, the index name must
also be supplied.

Syntax:

partition_id (table_name, partition_name [, index_name])

• partition_name — Returns the partition name for a given partition
ID. The index ID is required for both base tables and indexes; to
retrieve the partition name for a partition ID on a table, specify an
index ID of 0. If a database ID is supplied, the function returns the
partition name of a partition in another database on the current
server.

Syntax:

partition_name (index_id, partition_id [, dbid])
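Example:

The following query is a minimal sketch, built on the
AlterPartitionKeyColumn table from the earlier example, showing how
these functions combine to report on a single named partition. The
column aliases are illustrative:

select data_pages(db_id(), object_id("AlterPartitionKeyColumn"), 0,
           partition_id("AlterPartitionKeyColumn", "value1")) as data_pgs,
       reserved_pages(db_id(), object_id("AlterPartitionKeyColumn"), 0,
           partition_id("AlterPartitionKeyColumn", "value1")) as rsvd_pgs,
       row_count(db_id(), object_id("AlterPartitionKeyColumn"),
           partition_id("AlterPartitionKeyColumn", "value1")) as est_rows
go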


Influence of Partitioning on DBA Activities

• Database maintenance operations at the partition level require
smaller maintenance windows. This increases data availability
for all applications.

• Time spent managing a partition is much less than managing the
entire table, thus reducing the amount of application downtime
required for database maintenance.

• Improved VLDB support, partition-level data processing,
partition-level index management, partition elimination,
partition-level database management activities, and mixed
workload performance reduce the total cost of ownership.

• Partitions that are unavailable, perhaps due to disk errors, do not
impede queries that only need to access the available partitions.

• ASE 15 continues to process data even if one or more partitions
becomes unavailable due to hardware errors.

Influence of Partitioning on Long-time Archival

With the new partitioning schemes that are available with ASE 15,

an archival process can co-exist with the regular processing. For

example, if a table is partitioned by yearly ranges, the current year's
partition can serve regular production while prior years' partitions
hold long-term archives, as the sketch below illustrates.
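The following create table statement is a minimal sketch of such a
scheme. The table, columns, and segment names (seg1 through seg3) are
hypothetical, and the segments are assumed to already exist:

-- one partition per year; the newest partition takes current activity,
-- older partitions hold archived rows
create table SalesHistory
    (trade_date datetime      not null,
     amount     numeric(10,2) not null)
partition by range (trade_date)
    (y2004 values <= ("12/31/2004") on seg1,
     y2005 values <= ("12/31/2005") on seg2,
     yMax  values <= (MAX)          on seg3)
go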


Summary

Partitioning of tables and indexes is considered one of the major fea-

tures of ASE 15. For the first time, Sybase has provided a feature that

allows the ASE server to manage large tables and large databases.

The ability to select a partitioning strategy allows the DBA the flexi-

bility to manage large databases and tables with the knowledge that

performance will be improved by partitioning a table or index. The

DBA will have to keep in mind that to fully utilize the benefits of

partitioned tables and indexes, parallelism will need to be activated

on the server.


Figure 3-8: ASE 15 delivers higher availability, partitioned maintenance,
reduced downtime, support for OLTP/DSS/mixed workloads, improved
performance via partition elimination and directed joins, and DBA- and
operation-schedule-friendly maintenance. These maintenance-friendly
features lower the total cost of ownership.


Chapter 4

Scrollable Cursors

In this chapter we start with background information on the scrollable

cursor. The chapter continues to explain how to declare scrollable

and sensitive cursors, and how to manipulate the scrollable cursor.

The chapter will provide the reader with information on the rules

associated with scrollable cursors, along with an overview of the new

global variables for scrollable cursors. The reader will be advised of

situations where the new cursor functionality will impact tempdb.

The chapter contrasts the different cursor worktable materializations

between sensitive and insensitive cursors, and demonstrates how a

sensitive cursor can behave in an insensitive manner. This chapter

concludes with a brief note on the future direction of Sybase ASE

cursors.

Introduction

As part of ASE 15, Sybase introduces scrollable cursors. This new

functionality allows for bidirectional movement of the cursor’s posi-

tion. The scrollable cursor allows the cursor to be positioned directly

at an arbitrary row in the cursor’s result set. Further, with scrollable

cursors, the cursor position can revisit the same row more than one

time, with no limit. Prior to version 15, cursors were unidirectional

(forward-only) and each row in the cursor set could be fetched no

more than once.

To further enhance cursors, Sybase introduces cursor sensitivity.

As this chapter will explain, cursor sensitivity determines if data

changes independent of the cursor set are visible to the cursor. In


other words, when a cursor is declared as sensitive, data changes to

the cursor’s base table may be visible inside the cursor result set. This

chapter examines when the changes will be visible to the cursor. This

chapter also demonstrates how cursor sensitivity can be combined

with cursor scrollability.

Scrollable Cursor Background

Scrollable cursors address a competitive issue. The integration of

scrollable cursors into Sybase ASE 15 allows for ASE to match the

relational database management products from other vendors. At

least two Sybase competitors presently utilize a version of scrollable

cursors within their database engines.

Prior to ASE 15, cursors were functionally limited to what is now

classified as nonscrollable insensitive cursor types. To simulate

scrollable cursors, client tools or applications were occasionally uti-

lized. Depending on the implementation of the simulated scrollable

cursor, trade-offs would exist between the additional functionality of

the client-side cursor and ASE. In other words, with a simulated

scrollable cursor, result sets held by the client tool may unintention-

ally operate outside the scope of a transaction or hold an unnecessary

set of locks. Additionally, when the data inside ASE is changed, the

simulated cursor would not be updated.

With the introduction of the scrollable cursor, Sybase addresses

development cost and application performance issues. The scrollable

cursor eliminates the need for the simulated scrollability for the

client-side cursor. In many cases, scrollable cursors will eliminate the

need to create user-defined temporary tables to bidirectionally step

through a result set. This enhancement may help to reduce develop-

ment costs by removing the need to write customized code to

simulate a scrollable cursor.

Cursor Scrollability

As stated in the introduction, a scrollable cursor allows bidirectional

movement of the cursor position through a result set. In addition, the

scrollable cursor allows for cursor positioning at specifically called

rows in the cursor set. In this section, we elaborate on how to control

cursor scrollability.


Note: For scrollable cursors in ASE 15, the only valid cursor specifica-

tion is “for read only.” If you declare a scrollable cursor with a

specification clause of “for update” in ASE 15, ASE will return a syntax

error. Sensitive cursors may be updated provided they are declared as

non-scrollable. Stay tuned to future releases of ASE, as cursor

scrollability and sensitivity features may be enhanced in future releases.

To facilitate the scrollable cursor, Sybase provides the following syn-

tax updates for scrollable and sensitive cursors:

declare cursor_name

[cursor_sensitivity]

[cursor_scrollability] cursor

for cursor_specification

Note: cursor_scrollability can be defined as scroll or no scroll. The

default is no scroll. cursor_sensitivity can be defined as insensitive or

semi_sensitive. The default for the cursor is insensitive. No support for

the concept of “sensitive” exists in ASE 15.

Examples:

declare CSR1 scroll cursor for

select order_date from orders

declare CSU1 scroll cursor for

select start_date, end_date

from payroll where emp_name = "Brian Taylor"

To fetch rows from, or “scroll” through, the scrollable cursor result

set, several new extensions to the fetch command are now available.

These extensions are listed in the table below.

Extension    Explanation
first        Fetch the first row from the cursor result set.
last         Fetch the last row from the cursor result set.
next         Fetch the next row from the cursor result set. This is the
             default fetch performed when retrieving a row from a
             non-scrollable cursor, and the only extension to the fetch
             command allowed in a non-scrollable cursor.
prior        Fetch the previous row in the cursor result set.
absolute n   Fetch the nth row in the cursor result set, counting from
             the beginning of the cursor set.
relative n   Fetch the nth row in the cursor result set, counting from
             the current cursor position.


Following are examples of the fetch command.

Assume the following cursor structure:

declare CSI scroll cursor for

select start_date, end_date

from payroll

go

Fetch the first row in the cursor set:

fetch first CSI

Fetch the last row from the cursor set:

fetch last CSI

Fetch the prior row from the cursor set:

fetch prior CSI

Fetch the next row from the cursor set:

fetch next CSI

Fetch the 15th row in the cursor set:

fetch absolute 15 CSI

Fetch the 10th row forward from the current cursor position:

fetch relative 10 CSI

Cursor-related Global Variables

Before we get too far into the nuts and bolts discussion on scrollable

cursors, let’s introduce the cursor-related global variables:

@@rowcount

Depending on the type of cursor declared, @@rowcount is affected

differently:


Cursor Type          Effect on @@rowcount
Forward only cursor  Increments by one each time a row is fetched from
                     the cursor. This value continues to increment until
                     the number of rows fetched equals the number of rows
                     in the cursor result set.
Scrollable cursor    Can increment beyond the total count of rows in the
                     cursor result set; there is no maximum value for
                     @@rowcount. Regardless of the cursor direction
                     specified in the fetch statement, @@rowcount
                     increments by one with each successful fetch.
                     @@rowcount does not reflect the number of rows in
                     the result set.

@@fetch_status

Return Value  Description
0             Successful fetch statement.
–1            Failed fetch statement. The row requested is outside the
              result set, such as requesting the relative 15th row when
              the cursor set contains only 10 rows.
–2            Reserved value.

@@cursor_rows

Return Value  Description
–1            The cursor is declared as scrollable and semi-sensitive,
              but the cursor's worktable is not fully populated. The
              number of rows the cursor worktable will contain is
              unknown.
0             No rows qualified on the last open cursor, the last open
              cursor is closed or deallocated, or no cursors are open.
n             The last row in the scrollable sensitive cursor has been
              reached and the size of the worktable is known;
              @@cursor_rows now reflects the number of rows contained
              by the cursor (n).

Note: A fetch of the last row in the cursor set with the fetch last cur-

sor_name command will seed the @@cursor_rows global variable with

the number of rows in the cursor set.
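To see this behavior, consider the following sketch, which assumes a
semi_sensitive scrollable cursor over the payroll table used earlier;
the cursor name CSR_ROWS is hypothetical:

declare CSR_ROWS semi_sensitive scroll cursor for
    select start_date, end_date from payroll
go
open CSR_ROWS
go
select @@cursor_rows  -- returns -1: worktable not yet fully populated
fetch last CSR_ROWS   -- materializes the remainder of the result set
select @@cursor_rows  -- now reports the total number of cursor rows
go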


Changes to the sp_cursorinfo System Procedure

The sp_cursorinfo system procedure has been slightly modified to

display information regarding the cursor type. The following

sp_cursorinfo output and comments highlight the new information

displayed for cursors declared on ASE 15 servers:

Cursor name 'text_cursor' is declared at nesting level '0'.

The cursor is declared as NON-SCROLLABLE cursor. <--- cursor type

The cursor id is 2686977.

The cursor has been successfully opened 2 times.

The cursor was compiled at isolation level 1.

The cursor is currently scanning at a nonzero isolation level.

The cursor is positioned before the next row.

There have been 0 rows read, 0 rows updated and 0 rows deleted through this

cursor.

The cursor will remain open when a transaction is committed or rolled back.

The number of rows returned for each FETCH is 1.

The cursor is updatable.

This cursor is using 3316 bytes of memory.

There are 3 columns returned by this cursor.

The result columns are:

Name = 'ID', Table = 'brian_text', Type = INT, Length = 4 (updatable)

Name = 'name', Table = 'brian_text', Type = VARCHAR, Length = 10

(updatable)

Name = 'info', Table = 'brian_text', Type = TEXT, Length = 16 (updatable)

Showplan output for the cursor:

QUERY PLAN FOR STATEMENT 1 (at line 1). <--- new showplan output

1 operator(s) under root

The type of query is DECLARE CURSOR.

ROOT:EMIT Operator

|SCAN Operator

| FROM TABLE

| brian_text

| Table Scan.

| Forward scan.

| Positioning at start of table.

| Using I/O Size 2 Kbytes for data pages.

| With LRU Buffer Replacement Strategy for data pages.

Total estimated I/O cost for statement 1 (at line 1): 27.

*** Also note the I/O cost included on the last line.


Be Aware of Scrollable Cursor Rules!

Each of the following statements is subject to rules that apply based

on the cursor position. The Sybase manuals outline the rules for the

new fetch statement extensions in great detail. For the purpose of pro-

voking thought on how to handle error checking and cursor

positioning programmatically, we present a simplification of the cur-

sor rules through scrollable cursor scenarios involving cursor

position:

• next — A fetch utilizing the next extension when the cursor is

already positioned in the last row of the cursor set will result in

@@sqlstatus = 2, @@fetch_status = –1, and no data returned

by the fetch. Cursor position will remain on the last row of the

cursor set.

• prior — A fetch utilizing the prior extension when the cursor is

already positioned at the first row of the cursor result set will

result in @@sqlstatus = 2, @@fetch_status = –1, and no data

returned by the fetch. Cursor position will remain on the first row

of the cursor set. Note: A subsequent fetch of the next cursor row

will fetch the first row of the cursor result set.

• absolute — A fetch utilizing the absolute extension that calls a

row that is greater than the rowcount in the cursor set will result

in @@sqlstatus = 2, @@fetch_status = –1, and no data

returned by the fetch.

Note: A subsequent fetch of the next cursor row will fail after an abso-

lute fetch outside the range of the cursor set. A subsequent cursor call

will need to reposition the cursor position within the range of the cursor’s

total row count.

Example scenario:

• Declare and open a cursor with 10 rows in the result set.

• Fetch the 15th row in the cursor set:

fetch absolute 15 CSR1

Cursor position will then be after the 10th row in the result set.

• Fetch the “prior” row in the cursor set:

fetch prior CSR1

Returns the 10th row in the cursor result set.


• relative — A fetch utilizing the relative extension that calls a row

greater than the rowcount in the cursor set will result in

@@sqlstatus = 2, @@fetch_status = –1, and no data returned

by the fetch. A subsequent cursor call will need to reposition the

cursor’s position within the range of the cursor’s total row count.

Example scenario:

• Declare and open a cursor with 10 rows in the result set.

• Fetch the relative 10th row in the cursor set while positioned

at the 5th row in the cursor result set:

fetch relative 10 CSR1

Cursor position will then be after the 10th row in the result set.

• Fetch the “prior” row in the cursor set:

fetch prior CSR1

Returns the 10th row in the cursor result set.

A particular characteristic of scrollable cursors should be noted. In

prior releases of ASE, subsequent fetches from a cursor set were

often performed within a while loop. The while loop would check for

@@sqlstatus of 2. When @@sqlstatus of 2 was obtained, the cursor

would typically be closed and deallocated. With scrollable cursors,

@@sqlstatus of 2 is a condition where the cursor position specified

falls outside the range of rows in the cursor result set. It is not manda-

tory to close the cursor when @@sqlstatus of 2 is obtained from

ASE when operating a scrollable cursor.
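The following fragment sketches this repositioning pattern; the cursor
name CSR_ERR is hypothetical, and the payroll table is borrowed from
the earlier examples:

declare CSR_ERR scroll cursor for
    select start_date, end_date from payroll
go
open CSR_ERR
go
fetch absolute 15 CSR_ERR  -- may land beyond the end of the result set
if @@sqlstatus = 2
begin
    -- no need to close and deallocate; simply reposition within range
    fetch first CSR_ERR
end
go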

Cursor Sensitivity

Cursor sensitivity refers to the effect independent row modifications

in the base table will have on the cursor result set. A sensitive cursor

states the cursor result set can be affected by independent changes to

the base table. In contrast, with an insensitive cursor, independent

data changes are not visible within the cursor’s result set. In order to

create a sensitive cursor, it is declared as semi_sensitive in ASE 15.

The basic syntax to declare a sensitive cursor is as follows:

declare cursor_name semi_sensitive cursor


Example:

declare Employee_CSR semi_sensitive cursor

for select * from Employee

where hire_date >= "01/01/2005"

Rules:

• The default sensitivity for a cursor is insensitive.

• A sensitive cursor can be declared as a scrollable cursor.

• A sensitive cursor can only be declared as an updatable cursor in
ASE 15 as long as the cursor is not scrollable.

The sensitive cursor also has rules to govern the sensitivity in relation

to data changes. Data changes are visible to the cursor only if an

independent data modification:

• To the base table causes a row in the cursor result set to be
inserted or deleted.

• To the base table causes the value of a referenced column to
change.

• Forces a reorder of the rows in a cursor result set.

A further point with sensitive cursors must be explained and demon-

strated by example. When a cursor is declared as semi_sensitive and

no rows have been fetched by the cursor, the full cursor will continue

to be sensitive. In other words, every row in the cursor result set will

be sensitive to changes in the base table. As rows are fetched, the

fetched rows lose their sensitive property and become insensitive to

base table changes. The remaining rows not yet fetched through the

cursor remain sensitive to base table changes. To demonstrate this

concept of a sensitive cursor changing behavior from sensitive to

insensitive, consider the following example:

Note: The examples and text within this chapter are performed with the

default isolation level of 1. Modification of the isolation level may produce

different results.

Example table:

create table dbo.brian

(

a int NOT NULL,

b int NOT NULL,

c varchar(10) NOT NULL


)

lock datarows

go

Data set for example:

a b c

1 1 Brian

2 2 Carrie

3 3 Elliot

14 4 Jonah

5 5 Sammy

6 6 Wrigley

7 7 Nancy

8 8 Robert

9 9 Grant

10 10 Arthur

Demonstration 1: Update to a Row Already

Fetched

USER A

declare CIN semi_sensitive scroll cursor for

select a,c

from brian

where a < 5

go

open CIN

go

declare @a int,

@c varchar(10)

fetch first CIN

into @a,

@c

select @a, @c

go


Output:

a c

1 brian

USER B logs in to update the row fetched by the cursor:

update brian set a = 11

where a = 1

Confirm row is updated:

select * from brian where a = 11

Output:

a b c

11 1 brian

USER A, holder of the open cursor, fetches the updated row.

Changes to column a are not visible inside the cursor; this portion of

the cursor is now insensitive. The user runs the initial fetch again to
confirm that the change is not visible:

declare @a int,

@c varchar(10)

fetch first CIN

into @a,

@c

select @a, @c

go

Output:

a c

1 brian

As previously stated, this example demonstrates the impact of fetch-

ing a row into the cursor’s worktable. Once a row is fetched into the

worktable, this fetched row is no longer sensitive to changes sup-

ported by the semi_sensitive declaration. Any of the rows that are not

yet fetched into the cursor’s worktable do remain sensitive. This is

showcased in the next example.


Demonstration 2: Update to a Row Not Yet

Fetched

Again, consider table brian above with the original data set.

USER A logs in, declares, opens, and fetches the first row from

cursor CIN:

USER A:

declare @a int,

@c varchar(10)

fetch first CIN

into @a,

@c

select @a, @c

go

A second user, USER B, logs in and updates a row’s column a value

to a value that qualifies for inclusion in cursor CIN.

First, the user verifies the row’s contents.

select * from brian where a = 14

go

Output:

a b c

14 4 jonah

Update the targeted row so that it qualifies as a row in the cursor

statement:

update brian

set a = 4 where a = 14

go

USER A runs fetch next for three iterations to reach the newly quali-

fied row:

fetch next CIN

into @a,

@c

select @a, @c

go


Output from three iterations:

Iteration 1:

a c

2 carrie

Iteration 2:

a c

3 elliot

Iteration 3:

a c

4 jonah

As the example demonstrates, the row in which column a was

updated from 14 to 4 is now visible to the cursor. This sensitivity is

possible since the row was not fetched into the cursor’s worktable

prior to the update and the cursor was declared as semi-sensitive.

Cursor Sensitivity — An Exception

As noted above, when rows are fetched by a sensitive cursor, the

fetched rows lose their sensitive property and become insensitive to

base table changes. Additionally, an insensitive cursor, by definition,

will not return data that has changed after the declaration of the cur-

sor. There is one exception: rows that utilize the text and image

datatypes. To understand this behavior, consider the scenarios offered

in the next two paragraphs.

• For sensitive and insensitive scrollable cursors, if a text or image
column is fetched, updated by another process, then fetched
again, the changes to the text or image column will be visible
within the cursor.

• For sensitive and insensitive non-scrollable cursors, if a text or
image column is updated after the cursor is declared and opened,
and before the row is fetched, the changes to the text or image
column will be visible within the cursor.

The underlying reason for this behavior extends from the nature of

how ASE physically stores text and image datatypes. With text and

image data, a pointer is maintained between the physical location of

the data and the first page in the page chain where the text or image


data is stored. The text or image page chain is not locked after the

rows are brought into the cursor’s worktable.

Note: For cursors containing both text and image along with other

datatypes, only the text or image portion of the cursor will exhibit this

exceptional behavior.

Locking Considerations with Cursors

With the four cursor sensitivity and scrollability combinations, the

tables underlying the cursor will be locked in different manners.

Exploration of cursors with sensitive and scrollable characteristics

yields different results with locking schemes as described below:

• Scrollable/Insensitive — Shared locks are held against the base
table(s) only for the duration of the worktable materialization.
After this point, the typical shared locks (remember a scrollable
cursor is read-only) held against the base table(s) are released.

• Scrollable/Sensitive — Shared locks are held at the datapage or
row level on all qualifying cursor rows as they are fetched. Once
the end of the cursor is reached, all locks on the base table(s)
data are released, and subsequent locks persist only against the
cursor's worktable.

• Non-Scrollable/Sensitive — Locks are held at the datapage or
row level on all qualifying cursor rows as they are fetched. Once
the end of the cursor is reached, all locks on the base table(s)
data are released. The locking behavior for this cursor type is
similar to that of pre-ASE 15 cursors, or the non-scrollable,
insensitive cursor type.

• Non-Scrollable/Insensitive — The only cursor type available
prior to ASE 15. Locking behavior does not change between
ASE 12.5.x and ASE 15. Locks are held on the base table's
datapages or rows while the cursor remains open and the cursor
position has not advanced beyond the last row in the cursor result
set. Once the end of the cursor is reached, all locks on the base
table(s) data are released.


Impact on tempdb Usage

Sensitive and insensitive cursors have different impacts on tempdb.

When a cursor is declared as insensitive, the act of opening the cursor

will immediately generate a worktable in tempdb. In contrast, when a

cursor is declared as sensitive, the worktable is not immediately

materialized in tempdb. With a sensitive cursor, the rows are materi-

alized into tempdb as they are fetched from the cursor.

In order to demonstrate the effect on tempdb by sensitive cursors,

declare a cursor and perform the following steps:

declare large_cursor semi_sensitive scroll cursor for

select StartTime, EndTime

from Events

where eventID =0

go

open large_cursor

go

sp_helpdb tempdb

go

Output excerpt:

device_fragments free kbytes

tempdb_data1 2039938

tempdb_data2 2039986

tempdb_data3 2039986

fetch last large_cursor

go

sp_helpdb tempdb

go

Output excerpt:

device_fragments free kbytes

tempdb_data1 2039442

tempdb_data2 2040000

tempdb_data3 1946948

Note the decrease in free tempdb space after skipping to the last row

in the cursor large_cursor. A fetch of the last row of a scrollable sen-

sitive cursor materializes the worktable based on the return of all

rows, despite the need for only the last row. A similar impact exists if

we select the absolute 1,000th row from the cursor, for example. All


qualified cursor rows from 1 through 1,000 may load into the cursor’s

worktable in tempdb. For reasons discussed in the section called

“Sybase Engineer’s Insight” later in this chapter, the qualified rows

load into tempdb if the size of the result set exceeds the size threshold

of an internal memory structure designed to hold the cursor result set.

Worktable Materialization with Scrollable

Sensitive Cursors

The following cursor sequence demonstrates the materialization of

worktables in the cursor set, and the impact upon performance and

tempdb. The worktable built by the fetch command resides in tempdb

and grows in size as the cursor’s position advances through the cursor

result set:

Note: Once the cursor’s position reaches the last row in the scrollable

sensitive cursor, subsequent fetches of prior rows will not cause the

worktable to grow or be rebuilt.

declare large_cursor semi_sensitive scroll cursor for

select StartTime, EndTime

from Events

where eventID =0

/** large_cursor contains 3.9 million rows from a datapage locked table **/

The following sequence of fetch commands demonstrates the materi-

alization of a scrollable sensitive cursor into the worktable:

set statistics io on

go

-- skip forward 10 rows in cursor set:

fetch relative 10 large_cursor

go

statistics io output:

Table: Events scan count 1, logical reads: (regular=4 apf=0 total=4),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

-- skip forward 10 rows in cursor set:

fetch relative 10 large_cursor

go


Table: Events scan count 1, logical reads: (regular=4 apf=0 total=4),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

-- skip forward 100 rows in cursor set:

fetch relative 100 large_cursor

go

Table: Events scan count 1, logical reads: (regular=6 apf=0 total=6),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

-- skip forward 600 more rows in cursor set:

fetch relative 600 large_cursor

go

Table: Events scan count 1, logical reads: (regular=20 apf=0 total=20),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

-- skip forward 600 more rows in cursor set:

-- note the "worktable" is now scanned to access the current cursor row.

fetch relative 600 large_cursor

go

Table: Events scan count 1, logical reads: (regular=26 apf=0 total=26),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Worktable1 scan count 1, logical reads: (regular=536 apf=0

total=536), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

-- skip backward to the first cursor row, the worktable will be accessed

since it is now materialized:

fetch first large_cursor

go

Table: Events scan count 1, logical reads: (regular=26 apf=0 total=26),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Worktable1 scan count 1, logical reads: (regular=536 apf=0

total=536), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

-- skip to the last row in the cursor set, and the worktable is fully

materialized. All subsequent scans for cursor rows will now scan the

cursor’s worktable.

fetch last large_cursor

go


Table: Events scan count 1, logical reads: (regular=98238 apf=0

total=98238), physical reads: (regular=14098 apf=85440 total=99538), apf

IOs used=84108

Table: Worktable1 scan count 3, logical reads: (regular=14134117 apf=0

total=14134117), physical reads: (regular=1 apf=0 total=1), apf IOs

used=0

-- return to the first row in the cursor set, and the worktable is

scanned.

fetch first large_cursor

go

Table: Events scan count 1, logical reads: (regular=98238 apf=0

total=98238), physical reads: (regular=14098 apf=85440 total=99538), apf

IOs used=84108

Table: Worktable1 scan count 3, logical reads: (regular=14134117 apf=0

total=14134117), physical reads: (regular=1 apf=0 total=1), apf IOs

used=0

While this example is somewhat extreme (a cursor of 3.9 million

rows), it demonstrates how the scrollable sensitive cursor material-

izes the cursor’s worktable. When only a few rows of the cursor were

accessed, ASE did not create a worktable. Once ASE crosses an

internal threshold (see the section called “Sybase Engineer’s Insight”

later in this chapter) of cursor rows returned, a worktable is created.

Subsequent fetch statements are required to utilize the worktable to

access any new or previously fetched cursor rows. This worktable is

created in tempdb, and remains in tempdb until the cursor is closed.

Deallocation of the cursor is not necessary to release the space used

by the worktable.

A subsequent reopening of the cursor in the previous example,

followed by a fetch of the first row, reveals the worktable is not

scanned and the I/O count to retrieve the cursor row returns to a more

reasonable level:

fetch first large_cursor

go

Table: Events scan count 1, logical reads: (regular=4 apf=0 total=4),

physical reads: (regular=4 apf=0 total=4), apf IOs used=0

Now, let’s contrast the materialization of a scrollable insensitive cur-

sor with the above example of a scrollable sensitive cursor. Declare

and open the same cursor without the keyword semi_sensitive:


declare large_cursor scroll cursor for

select StartTime, EndTime

from Events

where eventID =0

open large_cursor

go

-- Worktable is materialized on the "open cursor" statement. Tempdb’s

available space is now reduced by the cursor’s fully materialized

size, before a single row is fetched from the cursor.

fetch first large_cursor

go

Table: Events scan count 1, logical reads: (regular=98238 apf=0

total=98238), physical reads: (regular=1719 apf=98101 total=99820), apf

IOs used=96515

Table: Worktable1 scan count 1, logical reads: (regular=14134111 apf=0

total=14134111), physical reads: (regular=1 apf=0 total=1), apf IOs

used=0

Conclusion of Sensitive vs. Insensitive Cursors

Note: The most important benefit of the sensitive cursor, in comparison

to the insensitive cursor, is the ability to return the first cursor row quickly.

A second benefit is that small cursors, those less than 16 K, can take
advantage of an internal 16 K memory structure and avoid the tempdb
usage associated with worktable creation. See the following section

called “Sybase Engineer’s Insight” for details on the 16 K memory

structure.

In the above example, the sensitive cursor returned the first row from

the cursor set after only four I/Os on the initial fetch. Zero I/Os are

performed on the sensitive cursor when the cursor is opened. In con-

trast, upon opening the insensitive cursor, all potential cursor rows

are loaded into a worktable before a single row is fetched from the

cursor. In this example, 14 million I/Os were performed before a sin-

gle row could be fetched from the insensitive cursor.


Sybase Engineer’s Insight

Earlier in this chapter references were made to an internal threshold.

In the context of cursors, this refers to an internal threshold that, once

crossed with a sensitive cursor, causes rows to begin to materialize

into the tempdb database. According to Sybase, the scrollable cursor

implementation for ASE 15 uses a 16 KB in-memory buffer to store

the result set. Only when the buffer is full is a worktable created. This

is an optimization whereby worktables (and thus tempdb usage) are

avoided for cursors with small result sets, and for semi_sensitive
cursors larger than 16 K from which only the first few rows are
fetched.

This 16 KB in-memory buffer consumes space in the server’s

procedure cache. Each open cursor will individually hold open a

unique 16 KB buffer. If the usage profile for a server contains a great

deal of cursor usage, be certain to consider this additional overhead

when sizing the procedure cache on an ASE 15 server. If, for

instance, a server holds 100 concurrent cursors at a given time, this

consumes 1.6 MB (16 K * 100) space in the server’s procedure

cache. Keep in mind that the 16 KB buffer is maintained by the cur-

sor when the total size of fetched rows exceeds the 16 KB threshold.

In addition, the 16 KB buffer is allocated in whole, not in pieces.

Therefore, if only one 10-byte row is fetched from the cursor, the full

16 KB buffer is allocated and reserved for use by the individual cur-

sor. The size of 16 KB for this buffer is currently fixed, and is not

configurable.

Summary

In order to ensure user satisfaction with the applications many of us

develop and support, it is crucial to deliver feedback to the end user

in the most timely fashion. We have all worked with users frustrated

at the appearance of the hourglass when their application begins to

look up a certain part, order, or customer.

The sensitive cursor provides the developer and DBA with an

additional tool in order to combat user frustration. As demonstrated

in this chapter, we displayed an example where a sensitive cursor

returned the first set of rows in four I/Os in comparison to the 14 mil-

lion I/Os associated with an otherwise identical insensitive cursor.

164 | Chapter 4: Scrollable Cursors

Page 196: New.features.guide.to.Sybase.ase.15

Your I/O savings will vary with sensitive cursors since most cursors

are probably not designed for the mass processing of data, but most

applications will benefit from cursor sensitivity.

As demonstrated, the scrollable cursor feature assists the devel-

opment effort by providing a mechanism to bidirectionally scroll

through result sets. The creation of special code to handle the manip-

ulation of a result set linked to objects such as “drop-down” lists will

no longer be necessary.

Future Direction

At the time of this book’s publication, the future direction of

scrollable cursors is not yet defined. The engineering team at Sybase

has not ruled on the feasibility of allowing scrollable and sensitive

cursors that can be declared for update, including fully sensitive

cursors.

The authors remain encouraged by Sybase’s commitment to

bring ASE into competitive balance with the introduction of

scrollable cursors. It should also be recognized that Sybase has done

an excellent job with the implementation of cursor sensitivity. This

enhancement clearly allows a very fast response time to return the

initial row from a cursor.


Chapter 5

Overview of Changes to the Query Processing Engine

This chapter provides an overview of the changes to the query pro-

cessing engine. The chapter discusses optimization goals, and

outlines how to set them at the server, session, and query levels. It

briefly describes the optimization criteria. Next, the chapter discusses

the optimizer’s new “timeout limit” feature and shows how and why

to limit the time the optimizer may spend on a particular query plan.

Finally, the chapter provides examples of the types of queries that

have benefited from the changes made to the Query Optimizer.

The Graphical Plan Viewer and the new set options to display

the tree are discussed in Chapter 10.

Introduction

Queries used for OLTP (online transaction processing) and for DSS

(decision support system) have different performance requirements.

OLTP queries are relatively less complex and the desired response

time is anywhere from split seconds to a few minutes. DSS queries,

on the other hand, are more complex and their response time can be

from a few minutes to many hours. Because of their differences in

usage and the performance requirements, OLTP and DSS used to be

handled on different systems. Today, the increasing complexity of

business requirements and the pressure to lower the total cost of own-

ership (TCO) are requiring the data servers to move toward a mixed

workload environment where the OLTP system also performs


the duties of decision support system. These mixed workload envi-

ronments are also called operational decision support systems

(ODSSs).

Optimization Goals

The new "optimization goal" configuration parameter allows the user

to choose the optimization strategy that best fits the query environ-

ment. The optimization goals are preset groupings of the optimization

criteria that in combination affect the behavior of the optimizer.

These goals direct the optimizer to use the features that will allow it

to find the most efficient query plan. The optimization goals are

shorthand for enabling logical “sets” of specific optimization criteria.

The optimization goals can be set at the server, session, or query

level. The server-level optimization goal is overridden at the session

level, which in turn is overridden at the query level.

The possible values for this parameter are allrows_oltp,

allrows_mix, and allrows_dss.

allrows_oltp

OLTP applications such as stock trading applications have

transactional queries of low or medium complexity. The allrows_oltp

option directs the query processing engine to use the features and

techniques that will generate the query plans most efficient for a

purely OLTP query. When this option is set, the optimizer optimizes

queries so that ASE uses a limited number of optimization criteria to

find a good query plan. Also, fewer sorting operations are performed.

This optimization goal is useful in applications that are performing

only transactional work and don’t need all the capabilities of ODSS.

Limiting the capabilities available to OLTP results in fewer query
plans being evaluated, which in turn improves the overall query
processing time.

The command to set the allrows_oltp optimization goal at the

server level is:

sp_configure "optimization goal", 0, "allrows_oltp"


The command to set the allrows_oltp goal at the session level is:

set plan optgoal allrows_oltp

And the command to set the allrows_oltp goal at the query level is:

select * from A

order by A.col1 plan "(use optgoal allrows_oltp)"

allrows_mix

When this option is set, the query processor optimizes queries so that

ASE uses all available optimization techniques, including the new

features and functionality, to find the best query plan. This is the

default strategy and is most useful in a mixed-query environment.

allrows_mix is basically allrows_oltp + merge joins + parallelism.

The command to set the allrows_mix optimization goal at the

server level is:

sp_configure "optimization goal", 0, "allrows_mix"

The command to set the allrows_mix goal at the session level is:

set plan optgoal allrows_mix

And the command to set the allrows_mix goal at the query level is:

select * from A

order by A.col1 plan "(use optgoal allrows_mix)"

Note: Optimization goals at the server level will have impact on all data-

bases, and may not be the intent of the DBA when making performance

tweaks to a single query.

allrows_dss

This option is useful in the DSS environment where the queries are of

medium to high complexity. This option is basically allrows_mix +

hash joins.

The command to set the allrows_dss optimization goal at the

server level is:

sp_configure "optimization goal", 0, "allrows_dss"


The command to set the allrows_dss goal at the session level is:

set plan optgoal allrows_dss

And the command to set the allrows_dss goal at the query level is:

select * from A

order by A.col1 plan "(use optgoal allrows_dss)"

Tip: In environments where the server is used for OLTP during the day

and as a DSS in the evening, you can have a scheduled job that sets the

configuration to allrows_mix in the morning and changes it to allrows_dss

in the evening before the batch jobs kick in.
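The two batches run by such a job might be as simple as the following
sketch; how the batches are scheduled is site-specific:

-- morning job: favor the daytime OLTP/mixed workload
sp_configure "optimization goal", 0, "allrows_mix"
go

-- evening job: favor the overnight DSS batch workload
sp_configure "optimization goal", 0, "allrows_dss"
go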

Determining the Current Optimization Goal

Since the optimization goals can be set at either server level or ses-

sion level, it may sometimes be difficult to keep track of the current

optimization goal. The global variable @@optgoal shows the optimi-

zation goal currently in force for that session. The command is:

1> select @@optgoal

2> go

------------------------------

allrows_dss

(1 row affected)

Optimization Criteria

Optimization criteria are specific algorithms or relational techniques

for the query plan. Each optimization goal has a default setting for

each optimization criterion. Even though these criteria can be set by a

DBA, Sybase does not recommend it at this point. Resetting optimi-

zation criteria may interfere with the default settings of the current

optimization goal, hurt the overall performance, and even produce an

error message. Please consult Sybase Technical Support if you see a

need to tune the optimization criteria.

Following are brief descriptions of some of the new optimization

criteria.


merge_join

The merge join algorithm relies on ordered input. merge_join is most

valuable when input is ordered on the merge key — for example,

from an index scan; it is less valuable if sort operators are required to

order input.

merge_union_all

merge_union_all maintains the ordering of the result rows from the

union input. merge_union_all is particularly valuable if the input is

ordered and a parent operator (such as merge join) benefits from that

ordering. Otherwise, merge_union_all may require sort operators that

reduce efficiency.

merge_union_distinct

merge_union_distinct is similar to merge_union_all, except that

duplicate rows are not retained. merge_union_distinct requires

ordered input and provides ordered output.

multi_table_store_ind

multi_table_store_ind determines whether the query processor may

use reformatting on the result of a multiple table join. Using

multi_table_store_ind may increase the use of worktables.

opportunistic_distinct_view

opportunistic_distinct_view determines whether the query processor

may use a more flexible algorithm when enforcing distinctness.

parallel_query

parallel_query determines whether the query processor may use par-

allel query optimization.


hash_join

hash_join determines whether the Adaptive Server query processor

may use the hash join algorithm. Join queries that involve joining

columns without indexes, or with expressions, can suffer perfor-

mance penalties when used with table scans and nested loop joins. In

this case, hash join performs the join much more efficiently than the

nested loop join. However, performance of hash joins can be greatly

affected by the data cache size available.

Optimization Timeout Limit

Queries that involve many tables often require large optimization

times. ASE 15 has a built-in timeout algorithm. The optimization

process will time out when more optimization time will not yield a

significantly better plan. The timeout mechanism not only returns

good plans quickly, but it also helps save resource (procedure cache)

consumption.

The optimization timeout requires a fully optimized plan to be

found; therefore, it won’t time out before a query plan has been formed.

It uses the projected execution statistics from this plan to determine

the timeout length.

Prior to ASE 15, much of the query processing time for complex

queries was spent doing exhaustive searches. For example, one

12-way join (using set table count 12) took 10 minutes to optimize in

11.9.3 but only 30 seconds to execute. The optimal plan was found

very early in the search, resulting in over 9 minutes of wasted query

optimization time.

A user can also specify the amount of time ASE 15 spends opti-

mizing a query. This option can be set at the server level using the

command:

sp_configure "optimization timeout limit", 10

Note: Optimization timeout at the server level will have impact on all

databases, and may not be the intent of the DBA when making perfor-

mance tweaks to a single query.


Or at the session level by using the command:

set plan opttimeoutlimit 10

Or at the command level by using the command:

select * from A

order by A.col1 plan "(use opttimeoutlimit 10)"

The parameter 10 in the above examples is the amount of time ASE

can spend optimizing a query as a percentage of the total time spent

processing the query.

You can find out whether or not the optimizer is timing out for a

particular query by setting the option show brief. The command is:

set option show brief

The output will show the “optimizer timed out” message. Following

is a part of the output from the above set option for a select query that

was joining nine tables. The optimization timeout limit was set to

10%.

...

...

------------------------------------------

Search Engine Statistics (Summary)

------------------------------------------

Total number of tree shapes considered:9

Number of major tree shapes generated:2

Number of tree shapes generated by flipping major tree shapes:7

Number of valid complete plans evaluated:5

Total number of complete plans evaluated:70

------------------------------------------

!! Optimizer has timed out in this Opt block !!

...

...

...

The best global plan (Pop tree):

FINAL PLAN ( total cost = 8915.451 ):


When the optimization timeout limit was increased to 90%, it pro-

duced the following message:

------------------------------------------

Search Engine Statistics (Summary)

------------------------------------------

Total number of tree shapes considered:55

Number of major tree shapes generated:47

Number of tree shapes generated by flipping major tree shapes:8

Number of valid complete plans evaluated:6

Total number of complete plans evaluated:129

------------------------------------------

The best plan found in OptBlock1 :

...

...

...

The best global plan (Pop tree) :

FINAL PLAN ( total cost = 8915.451 ):

Notice that in the second scenario, there is no “optimizer timed out”

message. Also, the number of complete plans evaluated is much

higher in the second scenario — 129 compared to 70. In both cases,

the optimizer chose the same final plan with the same total cost even

though it did much less work in the scenario where the optimization

timeout was set to 10%. This example illustrates the benefits of the

new timeout feature.

Query Processor Improvements

Performance of index-based data access has been improved. In the past, the optimizer could not use an index if the join columns were of different datatypes. With ASE 15 there are no more issues surrounding mismatched datatypes and index usage. More than one index per table can be used to execute a query, which increases the performance of queries containing ors and star joins.

New optimization techniques now try to avoid creating worktables. Worktables are created in tempdb to perform various tasks, including sorting; their creation slows performance since they are typically resource intensive. ASE 15's new hashing technique performs sorting and grouping in


memory, thus avoiding the necessity of a worktable. It is the buffer memory, not the procedure cache, that is used for this operation. The elimination of worktables has improved the performance of queries containing order by and group by clauses. The new hashing technique also improves the performance of aggregate functions, which are regularly used in summary reports.

Some of the enhanced optimizer features are targeted at improving performance in VLDB (very large database) environments. Improvements in the performance of queries involving range partitions, parallel processing, and fact tables are particularly helpful in these environments.

ASE 15 has enhanced parallelism to handle large data sets. It now handles both horizontal and vertical parallelism. Vertical parallelism provides the ability to use multiple CPUs at the same time to run one or more operations of a single query. Horizontal parallelism allows the query to access different data located on different partitions or disk devices at the same time.

Following are a few query scenario examples that have benefited

from the new optimizer.

Datatype Mismatch

If there is a join between two tables where the datatypes of the joining columns do not match, the query processor resolves the datatype mismatch internally and generates a plan with index scans. This will only happen if the compatibility is allowed by the datatype conversion rules.

In the following example, the query is able to use an index even though the join is on a mismatched column. The datatype of the column l_orderkey is double, whereas the datatype of o_totalprice is integer.

select avg(o_totalprice), max(l_linenumber)
from orders_mismatch, lineitem
where orders_mismatch.o_totalprice = lineitem.l_orderkey
and orders_mismatch.o_totalprice >= 500.00
go

QUERY PLAN FOR STATEMENT 1 (at line 1).

6 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator


|RESTRICT Operator

|

| |SCALAR AGGREGATE Operator

| | Evaluate Ungrouped MAXIMUM AGGREGATE

| | Evaluate Ungrouped COUNT AGGREGATE

| | Evaluate Ungrouped SUM OR AVERAGE AGGREGATE

| |

| | |NESTED LOOP JOIN Operator (Join Type: Inner Join)

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | orders_mismatch

| | | | Index : order_x

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Index contains all needed columns. Base table will not be read.

| | | | Keys are:

| | | | o_totalprice ASC

| | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf pages.

| | |

| | | |RESTRICT Operator

| | | |

| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | lineitem

| | | | | Index : lineitem_x

| | | | | Forward Scan.

| | | | | Positioning by key.

| | | | | Keys are:

| | | | | l_orderkey ASC

| | | | | Using I/O Size 16 Kbytes for index leaf pages.

| | | | | With LRU Buffer Replacement Strategy for index leaf pages.

| | | | | Using I/O Size 16 Kbytes for data pages.

| | | | | With LRU Buffer Replacement Strategy for data pages.

Figure 5-1 shows the graphical representation of the above query plan using the Graphical Plan Viewer (GPV), a feature introduced in ASE 15. The GPV is discussed in detail in Chapter 10.


Partition Elimination and Directed Joins

Partition elimination as a technique applies to all semantic partitioned

tables — range, list, and hash. When there is a join between two

range partitioned tables, the query processor evaluates the possibility

of eliminating the non-matching partition scans from the join. This

elimination of additional scans not only improves the performance of

the queries on the partitioned tables, but also reduces the resource

consumption. In most parallel joins between partitioned tables, the result is NxM worker processes. For example, if a table partitioned four ways is joined with another four-way partitioned table (especially one with the same partitioning scheme), the parallel Query Optimizer uses partition elimination to decide that only N+M worker processes are needed, which is 8 instead of 16 in this case.

The tables used in the following example, orders_range_partitioned and customer_range_partitioned, have been range partitioned


into the following three partitions: P1 values <= 500, P2 values <= 1000, and P3 values <= 1500, on the custkey column. No indexes exist on either table. As illustrated by the query plan, the query processor eliminated partitions 1 and 3 from the data to be scanned.
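For reference, a table such as customer_range_partitioned could be created along these lines (a sketch; the column list is abbreviated and segment placement is omitted):

create table customer_range_partitioned
    (c_custkey int not null,
     c_acctbal float not null)
partition by range (c_custkey)
    (P1 values <= (500),
     P2 values <= (1000),
     P3 values <= (1500))
go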

select avg(o_totalprice), avg(c_acctbal)

from orders_range_partitioned, customer_range_partitioned

where

o_custkey = c_custkey and

c_custkey = o_custkey and

o_custkey between 600 and 1000

go

QUERY PLAN FOR STATEMENT 1 (at line 1).

7 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|RESTRICT Operator

|

| |SCALAR AGGREGATE Operator

| | Evaluate Ungrouped COUNT AGGREGATE.

| | Evaluate Ungrouped SUM OR AVERAGE AGGREGATE.

| | Evaluate Ungrouped SUM OR AVERAGE AGGREGATE.

| |

| | |MERGE JOIN Operator (Join Type: Inner Join)

| | | Using Worktable3 for internal storage.

| | | Key Count: 1

| | | Key Ordering: ASC

| | |

| | | |SORT Operator

| | | | Using Worktable1 for internal storage.

| | | |

| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | orders_range_partitioned

| | | | | [ Eliminated Partitions : 1 3 ]

| | | | | Table Scan.

| | | | | Forward Scan.

| | | | | Positioning at start of table.

| | | | | Using I/O Size 16 Kbytes for data pages.

| | | | | With LRU Buffer Replacement Strategy for data pages.

| | |

| | | |SORT Operator


| | | | Using Worktable2 for internal storage.

| | | |

| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | customer_range_partitioned

| | | | | [ Eliminated Partitions : 1 3 ]

| | | | | Table Scan.

| | | | | Forward Scan.

| | | | | Positioning at start of table.

| | | | | Using I/O Size 16 Kbytes for data pages.

| | | | | With LRU Buffer Replacement Strategy for data pages.

Tables with Highly Skewed Histogram Values

For joins involving tables with highly skewed data distributions, e.g.,

highly duplicated values, the optimizer uses a join technique with histograms to accurately estimate the result set rows. This will generate

a better join execution plan.

In the following example, with the default optimization goal, the

optimizer selects a plan involving hash joins. This plan masks the

effect of join histograms. But when the optimization goal allrows_oltp

is used, it suppresses the hash joins and improves performance.

set plan optgoal allrows_oltp

go

set option show_histograms normal

go

select avg(l_linenumber) from lineitem_frequency, orders_frequency

where orders_frequency.o_orderkey = lineitem_frequency.l_orderkey

and lineitem_frequency.l_orderkey = 101

go

Creating Initial Statistics for table lineitem_frequency

.....Done creating Initial Statistics for table lineitem_frequency

Creating Initial Statistics for table orders_frequency

.....Done creating Initial Statistics for table orders_frequency

Creating Initial Statistics for index lineitem_xf

.....Done creating Initial Statistics for index lineitem_xf


Creating Initial Statistics for index order_xf

.....Done creating Initial Statistics for index order_xf

Start merging statistics for table lineitem_frequency

.....Done merging statistics for table lineitem_frequency

Start merging statistics for table orders_frequency

.....Done merging statistics for table orders_frequency

Start merging statistics for index lineitem_xf

.....Done merging statistics for index lineitem_xf

Start merging statistics for index order_xf

.....Done merging statistics for index order_xf

QUERY PLAN FOR STATEMENT 1 (at line 2).

6 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|RESTRICT Operator

|

| |SCALAR AGGREGATE Operator

| | Evaluate Ungrouped COUNT AGGREGATE

| | Evaluate Ungrouped SUM OR AVERAGE AGGREGATE

| |

| | |NESTED LOOP JOIN Operator (Join Type: Inner Join)

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | lineitem_frequency

| | | | Index : lineitem_xf

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Keys are:

| | | | l_orderkey ASC

| | | | Using I/O Size 16 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf pages.

| | | | Using I/O Size 16 Kbytes for data pages.

| | | | With LRU Buffer Replacement Strategy for data pages.

| | |

| | | |RESTRICT Operator

| | | |

| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | orders_frequency


| | | | | Index : order_xf

| | | | | Forward Scan.

| | | | | Positioning by key.

| | | | | Index contains all needed columns. Base table will not be read.

| | | | | Keys are:

| | | | | o_orderkey ASC

| | | | | Using I/O Size 16 Kbytes for index leaf pages.

| | | | | With LRU Buffer Replacement Strategy for index leaf pages.

Group By and Order By

For queries involving group by and order by clauses on the same column(s), the query processor generates plans with on-the-fly sorting/grouping techniques that avoid worktable creation.

In the following example, a worktable is not created because of

the new optimization techniques. There is an index on lineitem

(l_suppkey, l_partkey).

select l_suppkey, sum(l_partkey) from lineitem

group by lineitem.l_suppkey

order by lineitem.l_suppkey

go

QUERY PLAN FOR STATEMENT 1 (at line 5).

3 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|GROUP SORTED Operator

| Evaluate Grouped SUM OR AVERAGE AGGREGATE.

|

| |SCAN Operator

| | FROM TABLE

| | lineitem

| | Index : lineitem_spqd

| | Forward Scan.

| | Positioning at index start.

| | Index contains all needed columns. Base table will not be read.

| | Using I/O Size 16 Kbytes for index leaf pages.

| | With LRU Buffer Replacement Strategy for index leaf pages.


or Queries

For queries involving ors, the optimizer can now select a new sort

avoidance/duplicate elimination technique called hash union distinct.

This plan avoids creating worktables. In the following example,

indexes are lineitem (l_orderkey) and lineitem (l_suppkey).

select l_orderkey, l_suppkey from lineitem

where l_orderkey between 1000 and 1500 or

l_suppkey <= 100

go

QUERY PLAN FOR STATEMENT 1 (at line 5).

2 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|RESTRICT Operator

|

| |SCAN Operator

| | FROM TABLE

| | lineitem

| | Index : lineitem_x

| | Forward Scan.

| | Positioning at index start.

| | Using I/O Size 16 Kbytes for index leaf pages.

| | With LRU Buffer Replacement Strategy for index leaf pages.

| | Using I/O Size 16 Kbytes for data pages.

| | With LRU Buffer Replacement Strategy for data pages.

Star Queries

Star queries, which are very common in ODSS applications, are joins

between a large fact table and small dimension tables. ASE 15 can

quickly recognize star joins and generate efficient plans. The key

characteristic of an efficient plan for a star query is that the fact table

is accessed at the very end of the join order via the composite index,

and the dimension tables are joined in the order of the fact table’s

composite index. The statistics on the tables have to be up to date and

accurate for the optimizer to recognize the fact tables and dimension

tables.

In the following example, lineitem_fact is the fact table with a

composite index on (l_orderkey, l_partkey, l_suppkey). The plan


shows that the joins are between fact and dimension tables only and

there are no joins between dimensions.

select avg(l_extendedprice) from lineitem_fact,

orders_dim, part_dim, supplier_dim

where

lineitem_fact.l_partkey = part_dim.p_partkey and

lineitem_fact.l_orderkey = orders_dim.o_orderkey and

lineitem_fact.l_suppkey = supplier_dim.s_suppkey

and p_partkey < 8

and o_orderkey < 5

and s_suppkey < 3

The type of query is SELECT.

ROOT:EMIT Operator

|RESTRICT Operator

|

| |SCALAR AGGREGATE Operator

| | Evaluate Ungrouped COUNT AGGREGATE

| | Evaluate Ungrouped SUM OR AVERAGE AGGREGATE

| |

| | |N-ARY NESTED LOOP JOIN Operator has 4 children.

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | orders_dim

| | | | Index : order_xd

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Index contains all needed columns. Base table will not be read.

| | | | Keys are:

| | | | o_orderkey ASC

| | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf pages.

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | part_dim

| | | | Index : part_xd

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Index contains all needed columns. Base table will not be read.

| | | | Keys are:


| | | | p_partkey ASC

| | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf pages.

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | supplier_dim

| | | | Index : supplier_xd

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Index contains all needed columns. Base table will not be read.

| | | | Keys are:

| | | | s_suppkey ASC

| | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf pages.

| | |

| | | |RESTRICT Operator

| | | |

| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | lineitem_fact

| | | | | Index : lineitem_starcomp

| | | | | Forward Scan.

| | | | | Positioning by key.

| | | | | Keys are:

| | | | | l_orderkey ASC

| | | | | l_partkey ASC

| | | | | l_suppkey ASC

| | | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | | With LRU Buffer Replacement Strategy for index leaf pages.

| | | | | Using I/O Size 2 Kbytes for data pages.

| | | | | With LRU Buffer Replacement Strategy for data pages.


Summary

ASE 15's new features improve the performance of the data server with minimal or no tuning of the query processing engine. The improved performance not only reduces the hardware resources required, but also decreases the time a DBA spends performance tuning. This in turn lowers the total cost of ownership (TCO), a goal most managers strive to achieve.


Chapter 6

Detection and Resolution of Query Performance Issues

Introduction

This chapter focuses on the detection and resolution of performance issues at the individual query level. Performance and tuning is a broad topic and could fill an entire book. This chapter, however, focuses on showplan messages and the set option show family of query processor set commands. In pre-ASE 15 terms, set option show is utilized to display optimizer diagnostics. The detection and resolution of performance issues with some of the new features are explored, with particular emphasis on diagnosing incorrect partition strategies in an ASE server. Further, this chapter demonstrates the importance of a proper partition strategy with examples of how to detect and resolve problems that are a direct result of selecting the wrong partition key or the wrong partition type.

Note: Throughout this chapter, the optimizer may be referred to as the query processor or the Query Optimizer in addition to the customary term, the optimizer.

For Sybase ASE 15, the showplan output has evolved to a new format. This chapter uses the new formats and offers insight into the

ASE 15 showplan messages through example. Further, this chapter

offers areas of consideration within ASE and external to ASE that

should be analyzed prior to the diagnosis of a query issue within

ASE.


The typical pre-ASE 15 traceflags utilized for SQL debugging,

such as dbcc traceon 302, 310, 311, and 317, are supplanted by the

set option show functionality for ASE 15. Older dbcc traceflags are

not necessary once a DBA understands the new diagnostics. The 302,

310, and other traceflags still exist in ASE 15; however, there is a

marginal need to perform analysis with these traceflags. In fact,

the output of some of the diagnostic dbcc traceflags may offer less

information when compared to pre-ASE 15 servers. As such, the set

option show commands serve as a replacement and enhancement for

the traditional ASE traceflags. With these new set options, this chap-

ter demonstrates how to detect SQL performance problems and offers

insight into the various messages returned by a subset of these diag-

nostic commands.

Note: The traditional dbcc traceflags such as 302 and 310 will be eliminated by Sybase in the near future. It is advisable for database administrators and privileged system users to become familiar with the new set options in preparation for the pending elimination of the traditional dbcc traceflags used for optimizer debugging purposes.

An Approach to Poor Query Performance Diagnosis

Now that we have touched on the changes to the diagnostic tools, a framework for the resolution of poor query performance can be presented. First, an important question must be posed:

What should database administrators look at before

scrutinizing individual query performance?

Some initial diagnosis should be performed on ASE as a whole, on

the host server, and on the network. Query performance diagnosis is

not a simple or quick task, so eliminating common environmental issues before investing time in query-level analysis is

recommended. For those database administrators with significant

experience, this is common sense. For those just starting to enter the

database administration field, save resources and time by ensuring

the environment (ASE, host, network) is in order prior to proceeding.

To gain some insight about the server and host environments, a few

188 | Chapter 6: Detection and Resolution of Query Performance Issues

Page 220: New.features.guide.to.Sybase.ase.15

tools are readily available to perform initial diagnosis before scrutinizing query-level performance.

From within ASE:

• MDA tables — See Chapter 14 for useful queries.
• syslogshold — Check for long-running transactions (see the example after this list).
• QP Metrics — See Chapter 9 for how to use sysquerymetrics to analyze query performance.
• sp_sysmon — Server-level details on engine utilization, context switches, memory, disk use, and network traffic to and from ASE. From a QP performance perspective, however, sp_sysmon is a poor tool: it does not get down to the process level, and it aggregates all the counters across the monitored period, leveling any spikes in performance that would indicate problems. While sp_sysmon is still available and has some very limited functionality that MDA does not yet support, such as Rep Agent monitoring, the reality is that after ASE 12.5.0.3, the functionality from a QP perspective is much better and more accurately monitored with the MDA tables.
• sp_lock — Details on all locks at a specific point in time within ASE as a whole. Search for incompatible or blocking locks. As with sp_sysmon, the MDA tables can provide a better and more accurately monitored solution for lock monitoring.
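For example, a quick syslogshold check for long-running transactions might look like the following (a sketch; the 5-minute threshold is arbitrary):

-- transactions that have been open for more than 5 minutes
select h.spid, h.name, h.starttime
from master..syslogshold h
where datediff(mi, h.starttime, getdate()) > 5
go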

External to ASE:

• For Windows environments — Task Manager's CPU and memory usage
• For Unix environments:
  • top — Host-level usage information
  • prstat — More host-level usage information
  • vmstat — Memory usage

Next, it is vital to understand the following:

Why does the optimizer sometimes pick the wrong plan?

For this topic, the subtitle of the section could be why do I think the

optimizer picked a wrong plan? Often the optimizer picks the right

plan based on the information about the data contained in the databases. Unfortunately, the information the optimizer uses to pick the

plan isn’t always relevant nor does the optimizer have a complete and

accurate representation of the state of the data within the databases.


The optimizer is only as good as the completeness and quality of the

information it is provided. In other words, the optimizer works off

statistics to make decisions about the data. Database statistics are

samples that contain information representing qualities about the

data, such as the uniqueness and range of information contained by a

column, index, or partition.

Important Note: Accurate statistics on columns, tables, and partitions in user and system tables are more important with ASE 15 than with older versions of ASE. When in doubt about whether to run update statistics with ASE 15, err on the side of caution and update index statistics where possible.

Common Query Performance Factors

Many factors can lead the optimizer to make plan selections that are less than optimal. Some poor query plan selections are due to differences between the actual state of ASE and its data and the gathered statistics ASE uses to make query plan selections. When database administrators think the optimizer is wrong, often it is not. As stated, the optimizer often has less than perfect information to function accurately. Some common and not-so-common causes of suboptimal query plan selection are as follows:

• Missing or invalid statistics
• Insufficient distribution steps
• Lack of indexing
• Poor indexing
• Data fragmentation
• Partition imbalance
• Server- or session-level configurations
• Over-engineered forceplan
• Invalid use of index force
  • An index may be forced by index ID number, where indexes on the server were dropped and created in a different order than when the index force was employed.
• Inefficient query plan forced by abstract plan
• Excessively skewed index values


To determine whether the database administrator or the users are, in fact, correct and the optimizer is making inefficient choices, the above bullet items should be ruled out as much as possible before declaring

the optimizer has made an incorrect decision. To accomplish this

objective, the next step is to eliminate the causes.

Eliminating Causes for Sub-Optimal Plan Selection

Find Missing or Invalid Statistics

Examine the expected and actual row counts and I/O counts. These

are specified in either the output of the Plan Viewer or the output of

the set statistics plancost on diagnostic command. If there are large

or frequent discrepancies between the estimated and actual values,

perhaps the query performance problem is caused by old or invalid

statistics. Scrutinize the datachange() values for each column and

index on the query tables. How much has the data changed on key

columns between now and the last run of update statistics? If the

datachange function indicates no change to the data, invalid statistics

are probably not the cause of the optimizer’s poor plan selection,

unless the statistics were manually altered or deleted. If the

datachange function indicates data has changed beyond an acceptable threshold in the table(s), update the statistics on the columns

utilized in the problem query where possible. An acceptable threshold may be to accept datachange values of 10% or less. Be sure to

update secondary columns of indexes with the update index statistics

command as opposed to the update statistics command if composite

indexes are in place and might be used by the query. Run the diagnostic command set option show_missing_stats to determine if

additional statistics could benefit the query’s plan selection.
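As a sketch, the datachange check and the follow-up statistics refresh for a hypothetical orders table might look like this (table, column, and threshold are illustrative):

-- percentage of data changed on order_date since statistics were last updated
select datachange("orders", null, "order_date")
go
-- if the change exceeds the acceptable threshold (e.g., 10%), refresh statistics
update index statistics orders
go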

Consider Range Cell Density on Non-Unique Indexes

The default number of distribution steps in statistics range cells was reduced to 20 in ASE 11.9.2. While this is sufficient for OLTP applications that are mainly concerned with inserting, updating, or

deleting small numbers of rows based on unique index or primary

key access, it can lead to inaccurate costing for non-unique indexes


on common columns in mixed workload environments. For example, application developers commonly place an index on a date column. Now consider a system that processes 100,000 orders per day.

At the end of three years (roughly 750 business days), the order history table will have 75 million rows. If distributed evenly, each of the 20 distribution steps in the range cells would cover nearly four million rows. The problem is that for range queries involving one week or one month, the query optimization cost is the same, as four million rows covers approximately 40 days per range cell. By increasing the number of distribution steps to

1,000 (via update index statistics table_name using 1000 values), each range cell now covers only about one day's worth of data, providing much more accurate costing for typical DSS reporting for

weekly and monthly operations. The impact of increasing the number

of distribution steps is the slight increase in data caching requirements to hold the extra systabstats and sysstatistics entries — likely

much less than 1 MB of memory.
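A sketch of that command for the hypothetical order history table described above:

-- widen the histogram from the default 20 steps to 1000 range cells
update index statistics order_history using 1000 values
go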

Identify Index Needs

Look at the showplan output for table scans and reformatting messages. Are indexes present on columns used in joins? Ensure the

columns used as search arguments (SARGS) have an available index.

Is there a need for a covering index? Would the query make use of a

functional index?

Identify Poor Index Strategy

Examine the showplan output for table scans. Are the current indexes

utilized? Are the existing indexes sufficiently selective?

Fragmentation of Data

Analyze the optdiag output on the key columns. Is fragmentation

reported? Look at the cluster ratios, forwarded rowcounts, and

deleted rowcounts in the optdiag output. Query the systabstats table

for fragmentation hints offered by the forwrowcnt and delrowcnt col-

umns. To remedy the problem, possibly drop/recreate clustered

indexes for single partition tables, or possibly drop and recreate local

indexes at the partition level for semantically partitioned tables. Use

bcp out/truncate/bcp in, or run the reorg commands to eliminate data

fragmentation.
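A systabstats query along these lines can surface those fragmentation hints (a sketch; the table name is illustrative):

-- forwarded and deleted row counts hint at fragmentation
select object_name(id), indid, forwrowcnt, delrowcnt
from systabstats
where object_name(id) = "orders"
go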


To prevent the fragmentation from recurring once it is fixed,

consider altering the table and changing the expected row size

(exp_row_size) to a value that covers 75% to 85% of the width of the

existing rows.
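A sketch of that change via sp_chgattribute, assuming existing rows roughly 200 bytes wide:

-- reserve space for rows at about 80% of the current row width
sp_chgattribute "orders", "exp_row_size", 160
go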

Resolve Partition Imbalance

Some options to resolve this problem are listed below, with a repartitioning sketch following the list. Additionally, there are other ways to solve this problem that are not discussed here.

• Drop/recreate clustered indexes for round-robin partitions.
• Change the partition strategy (for example, from range or list to hash-based partitions if a unique partition key is available).
• Alter the table to only one partition.
• Choose different partition ranges or partition keys for range partitions.
• Add or remove range partitions to restore balance across range partitions.
• Choose different partition keys for list-based partitions, or add or remove partitions from list-based partitions.
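As an example of the change-of-strategy option, repartitioning to hash on a unique key might look like the following sketch (table, key, and partition names are illustrative; repartitioning redistributes data and can be expensive on large tables):

alter table orders
partition by hash (order_id)
    (p1, p2, p3, p4)
go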

Reset Server- or Session-level Options

As a general rule, accept server defaults for optimization strategies,

and eliminate session-level configuration settings. Often, server- or

session-level settings are applied for very specific problems. Altering

optimization configuration settings can limit the optimizer’s options

when it attempts to formulate the best query plan. Specific questions

to ask for this scenario would be:

• Was the server- or session-level optimization strategy changed at any point, and is the change relevant to most queries within the server?
• Is forceplan employed within the query?
• Are indexes forced within the query?
• Does the force effort need to go further and set join types?
• What is the optimization strategy specified at the server or session level?
• Is an abstract plan employed?


Overengineered Forceplan

Remove the forceplan directive from the query, and resubmit the same query without it. Measure the performance difference between the forceplan execution and the execution with forceplan disabled. Check whether the join order of tables is different

between the two executions. If the join order is different and perfor-

mance is better without the forceplan, this may be a sign to drop the

forceplan statement from the query.

Invalid Use of Index Force

Remove instances of forced index syntax from the query, and resubmit the same query without the forced index syntax. Measure the performance difference between the execution with forced indexes and without. The performance difference can be measured with the following set commands (a brief sketch follows the list):

• set statistics io on — Note any differences in I/O counts between the two executions, and consider the cardinality between the scans of the tables compared to known data cardinalities. For example, if there is an average of three items per order, then for nested loop join methods one would expect the number of scans of the orderitem table to be approximately three times that of the orders table. If the count is orders of magnitude higher, this suggests an incomplete join or partial Cartesian product due to faulty group by or subquery construction.

• set showplan on — Review the showplan output to determine if index selection is different between the forced index execution and the execution where the optimizer makes the index choice. If the index selection is different, it is likely that the optimizer's index selection is more appropriate, provided the database's statistics are well maintained.

• set statistics time on — In instances where parallel execution is employed, pay particular attention to the set statistics time measurements. I/O counts can be higher when queries are executed in parallel in comparison to non-parallel executed queries. Despite the higher I/O counts with parallelism, the time to execute these queries can be less than the execution time of a similar query with no parallelism. Typically, the time is most relevant to the users, and on servers that are not


CPU or I/O bound, this discrepancy in I/O counts may be perfectly acceptable.
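A minimal sketch of the comparison (table and index names are illustrative):

set statistics io on
set statistics time on
go
-- execution with the forced index
select * from orders (index ord_cust_idx) where custkey = 42
go
-- same query with the optimizer free to choose the index
select * from orders where custkey = 42
go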

Inefficient Query Plan Forced by Abstract Plan

Remove the abstract plans and resubmit the queries. Prior to

resubmission, enable showplan, statistics I/O, and statistics time in

order to measure and note the differences between query executions.

When an abstract plan is employed, pay particular attention to those

options that cannot otherwise be forced with the more traditional

optimizer tools (forceplan, force index). Here are some suggestions

for where to note differences in queries where abstract plans are

employed:

• Did removal of the abstract plan change the query's join type? (For example, did it change from a merge_join to a hash_join?)
• Did the cache strategy change?
• Has the parallel degree changed for a portion of the query?

If the answer is yes to any of these questions, consider dropping or

revising the abstract plan.

Query Processor “set options” — The Basics

As mentioned in the introduction of this chapter, ASE 15 introduces new query-level diagnostic commands, the “set options.” Here, the various set option commands are listed along with the syntax to extract various detail levels from each command. The following table shows the different set options available for displaying the query processor's diagnostics.

Set Option Description

show Basic syntax common to all modules

show_lop Logical operators used

show_managers Data structure managers used

show_log_props Logical managers used

show_parallel Parallel query optimization

show_histograms Histograms processed

show_abstract_plan Details of an abstract plan


show_search_engine Details of a search engine

show_counters Optimization counters

show_best_plan Best query plan

show_code_gen Code generation

show_pio_costing Estimates of physical input/output

show_lio_costing Estimates of logical input/output

show_elimination Partition elimination

show_missing_stats Stats missing that may have benefited optimization

The syntax to enable a set option to display the query processor’s

diagnostics is shown below, along with an example of how to enable

the show_histograms diagnostic output. Note the absence of quotes

or commas when enabling the diagnostic options.

set option option-name [normal | brief | long | on | off]

go

Example:

dbcc traceon(3604)

go

set option show_histograms normal

go

Note: Despite the initiative to migrate away from the old numeric dbcc

trace commands for optimizer diagnostic output (and the potential

replacement of these dbcc commands), the new set options require

traceflag 3604 set to “on” for the results to return to the client. This is in

line with the “classic” optimizer debugging as it exists in previous

releases of ASE.

The “set option” parameters are:

• normal — Brief, but not complete, diagnostic information
• brief — Very brief diagnostic information
• long — Verbose and complete output for a set option
• on — Acts as a toggle for a show option. If an option is enabled but subsequently set to off, it can be re-enabled with on at the same level (normal, brief, or long) as previously set.
• off — Used to disable a show option


The following is a basic example to demonstrate the output of the set

option of show_best_plan:

dbcc traceon(3604)

go

set option show_best_plan on

go

select o.name

from master..sysobjects o,

master..syscolumns c

where o.id = c.id

and c.name = "id"

go

Output:

DBCC execution completed. If DBCC printed error messages, contact a user with System Administrator (SA) role.

The command executed successfully with no results returned.

The best global plan (Pop tree) :

FINAL PLAN ( total cost = 329.0547 ):

lio=31.00147 pio=8.999993 cpu=420.52

( PopEmit cost: 0 props: [{}]

( PopMergeJoin cost: 329.0547 T(L31.00147,P8.999992,C420.52)

O(L0,P0,C74.52) props: [{}]

( PopRidJoin cost: 216.4013 T(L24.00071,P5.999996,C184)

O(L21.00071,P2.999996,C92) props: [{1}]

( PopIndScan cost: 90.2 T(L3,P3,C92) O(L3,P3,C92) props: [{1}]

Gti3( csyscolumns ) Gtt1( c ) )

( PopRidJoin cost: 105.2014 T(L7.00076,P2.999996,C162)

O(L5.00076,P0.9999962,C81) props: [{1}]

( PopIndScan cost: 62.1 T(L2,P2,C81) O(L2,P2,C81) props: [{1}]

Gti2( csysobjects ) Gtt0( o ) )

)

)

)

name

sysobjects

sysindexes


syscolumns

.

.

.

Note: The set option of “best plan” is a comprehensive set option,

meaning it will display the combined output of other set options into one

dump of query processor diagnostics while enabling only the one option.

For the particular excerpt above, nearly exact output can be obtained

with the set option show_pio_costing long.

Please note the complexity of the diagnostic output in the above

example. Much of the information displayed in this output is very

low-level plan information. Plan information at this level may not be

necessary to understand except when Sybase Technical Support becomes involved. Sybase ASE 15 is designed to provide this information in an easy-to-obtain manner for specialized cases, or for occasions when very granular query processor tuning efforts are identified.

In the preceding example, some useful information from the output is the I/O count and CPU information:

FINAL PLAN ( total cost = 329.0547 ):

lio=31.00147 pio=8.999993 cpu=420.52

Query Optimizer Cost Algorithm

Observe the final “best plan” carried a total cost of approximately

329. This plan comprised 31 logical I/Os, 9 physical I/Os, and 420

CPU cycles.

According to Sybase, the costing algorithm for ASE 15 to determine this total cost is:

Total Cost = (Physical I/O * 25) + (Logical I/O * 2) + 0.1 * (number of rows expected to be compared or scanned in the query)

where the number of rows expected to be compared or scanned in the query is equivalent to the “cpu” costing in the FINAL PLAN section of this example.


Using the same diagnostic output, the query cost is derived as

follows:

(9 * 25) + (31 * 2) + (0.1 * 420)
225 + 62 + 42 = 329 Total Cost

Remember, ASE uses a cost-based optimizer, not something like a

“rules-based” optimizer. With few exceptions, the optimizer will always select the plan that results in the lowest cost as determined by the costing algorithm presented above.

Note: The weighting factors employed in the total cost formula are not

configurable.

ASE employs a cost-based optimizer or query processor. The total

cost derived with this algorithm will largely be representative of the

query plan selected by ASE 15. Additionally, since the formula is not

configurable, it may be less accurate in systems with very slow disk

read or write speeds. The algorithm may not be sufficient to accurately cost and ultimately choose the best query plans in such situations.

ASE 15 vs. 12.5.x Cost Algorithm

To better understand the costing algorithm associated with ASE 15,

compare the 12.5.x costing algorithm below:

Total Cost = (Physical I/O * 18) + (Logical I/O * 2)

Plugging in the numbers from the previous query shows the difference between the cost basis on ASE 15 and the cost basis on ASE 12.5.x:

(9 * 18) + (31 * 2)
162 + 62 = 224 Total Cost (ASE 12.5)

There are two main costing differences between ASE 15 and ASE 12.5.x. First, the penalty for physical I/O is greater by 7 units in ASE 15 (25 versus 18). Second, ASE 15 carries a CPU cost, where pre-ASE 15 servers did not consider CPU cost at all. The incorporation of CPU costing in the algorithm is an indirect result of some of the CPU-intensive


enhancements in ASE 15. These enhancements include hash-based algorithms replacing I/O-intensive worktable creation for distinct, exists, group by, and order by statements. Although the worktable overhead is gone, the hashing work still carries a cost on the server, so rather than overlook it, CPU cost was added to the ASE 15 costing algorithm. Fortunately, CPU processing is typically faster than worktable creation.

Query Processor “set options” — Explored

In this section, a few of the set options for ASE 15 are explored. After reading this section, database administrators can go forward with a greater understanding of these diagnostics to support the detection and resolution of query problems.

Reminder: Remember to enable the 3604 traceflag to return the diag-

nostic output of the set options to the client session.

show_missing_stats

The set option of show_missing_stats is used to return a message to

indicate if the optimizer determines that a column has optimization

potential. In order for this message to appear, no statistics or densities

can be available for this column. When the optimizer recognizes this

situation, a message can be returned to the user’s screen or to the

errorlog, depending on the traceflag settings.

The set option of show_missing_stats, even with long specified

as the display option, only shows hints of where the optimizer could

have used stats. In the following example, the optimizer indicates statistics would be beneficial on the code column of the customers table.

1> dbcc traceon(3604)

2> go

DBCC execution completed. If DBCC printed error messages, contact a user with System Administrator (SA) role.

1> set option show_missing_stats long

2> go


1> select * from OrderTracking..customers where code = 30000

2> go

NO STATS on column OrderTracking..customers.code

identification code status effectiveDate

-------------------- ----------- ------ --------------------------

Brian Taylor 30000 N Sep 14 2005 10:10PM

(1 row affected)

Enabling set statistics plancost on in conjunction with the display

option of missing stats allows the database administrator to verify if

stats are missing. Note the large discrepancy between estimated and

actual rows in this simple query:

1> set statistics plancost on

2> go

1> select * from OrderTracking..customers where code = 30000

2> go

NO STATS on column OrderTracking..customers.code

identification code status effectiveDate

-------------------- ----------- ------ --------------------------

Brian Taylor 30000 N Sep 14 2005 10:10PM

==================== Lava Operator Tree ====================

Emit

(VA = 1)

1 rows est: 6000

cpu: 0

/

TableScan

OrderTracking..customers

(VA = 0)

1 rows est: 6000

lio: 268 est: 1100

pio: 0 est: 138

============================================================

(1 row affected)


Update the statistics on the code column and reissue the same command. Our estimate of rows returned by the optimizer drops from

6000 to 2. A “missing stats” problem has been successfully solved

with this simple example.

1> select * from OrderTracking..customers where code = 30000

2> go

identification code status effectiveDate

-------------------- ----------- ------ --------------------------

Brian Taylor 30000 N Sep 14 2005 10:10PM

==================== Lava Operator Tree ====================

Emit

(VA = 1)

1 rows est: 2

cpu: 0

/

TableScan

OrderTracking..customers

(VA = 0)

1 rows est: 2

lio: 268 est: 1100

pio: 0 est: 138

============================================================

(1 row affected)

Note: The estimated row counts would have been even less accurate if the underlying table did not employ the

range partition strategy on the code column. The partition awareness of

the ASE 15 optimizer eliminated three unqualified partitions according to

the showplan output, thus offering an advantage over the same query

issued against a non-partitioned table containing the exact same data:

QUERY PLAN FOR STATEMENT 1 (at line 1).

1 operator(s) under root

The type of query is SELECT.


ROOT:EMIT Operator

|SCAN Operator

| FROM TABLE

| OrderTracking..customers

| [ Eliminated Partitions : 1 3 4 ]

| Table Scan.

| Forward Scan.

| Positioning at start of table.

| Using I/O Size 16 Kbytes for data pages.

| With LRU Buffer Replacement Strategy for data pages.

show_elimination

The set option of show_elimination is useful for reviewing the steps

taken by the ASE 15 query processor to eliminate semantic partitions

from consideration by the query. With this diagnostic command, partition elimination can be verified by the database administrator.

Consider the following query:

select effectiveDate, count(*)

from customers

where code = 30004

or (code < 42000

and code > 41000)

group by effectiveDate

order by 2 desc

Table semantically partitioned by range, defined as follows:

Partition key is the code column of the customers table.

Partition_Conditions

VALUES <= (15000)

VALUES <= (30000)

VALUES <= (45000)

VALUES <= (60000)

Based on the partition key and the range definitions, it is logical to

expect the first, second, and fourth partitions to be eliminated from

consideration by the Query Optimizer. showplan will verify the elimination. The set option of show_elimination will explain why the

partitions were eliminated by the Query Optimizer.


Showplan excerpt:

| | | |SCAN Operator

| | | | FROM TABLE

| | | | brian

| | | | [ Eliminated Partitions : 1 2 4 ]

| | | | Table Scan.

| | | | Forward Scan.

| | | | Positioning at start of table.

| | | | Using I/O Size 16 Kbytes for data pages.

Excerpt from set option show_elimination long:

eliminating partition represented by predicates:

START Dumping Predicate Set

code<=15000 tc:{1} Gt:{1}

END Dumping Predicate Set

eliminating partition represented by predicates:

START Dumping Predicate Set

code>15000 tc:{1} Gt:{1}

AND

code<=30000 tc:{1} Gt:{1}

END Dumping Predicate Set

eliminating partition represented by predicates:

START Dumping Predicate Set

code>45000 tc:{1} Gt:{1}

AND

code<=60000 tc:{1} Gt:{1}

END Dumping Predicate Set

This output is only an example of the output generated by the set

option of show_elimination, but it serves to demonstrate the information presented by this diagnostic command. If a partition is not

eliminated, then show_elimination provides a tool to determine why a

partition was not eliminated from a query plan.

show_abstract_plan

The set option of show_abstract_plan is used to generate an abstract plan for a query. This plan can then be extracted from the show_abstract_plan output and used as a starting point for query tuning with abstract plans. The following example demonstrates how to generate an abstract plan with the set option:


dbcc traceon(3604)

go

set option show_abstract_plan long

go

select d.Name

from Distribution d,

Extension e

where d.distributionID = e.distributionID

and d.market = (select max(d2.market)

from Distribution d2

where d2.distributionID = d.distributionID)

go

Output:

The abstract plan (AP) of the final query execution plan:

( nested

( m_join

( i_scan Distribution_1CUP

( table ( d Distribution ) ) )

( i_scan Extension_1NUP

( table ( e Extension ) ) ) )

( subq

( scalar_agg

( i_scan Distribution_1CUP

( table ( d2 Distribution ) ) ) ) ) )

( prop

( table ( d Distribution ) )

( parallel 1 )

( prefetch 16 )

( lru ) )

( prop

( table ( e Extension ) )

( parallel 1 )

( prefetch 16 )

( lru ) )

( prop

( table ( d2 Distribution ) )

( parallel 1 )

( prefetch 2 )

( lru ) )

To experiment with the optimizer behavior, this AP can be modified and then

passed to the optimizer using the PLAN clause:

SELECT/INSERT/DELETE/UPDATE ... PLAN '( ... )


As the above message indicates, the abstract plan acquired from the

set option show_abstract_plan can be passed to the optimizer at

query execution with the plan clause. Thus, the database administrator can modify the abstract query plan, where appropriate, and

resubmit it to the ASE server.

Note: It is possible the DBA-modified abstract query plan may not perform as efficiently as the optimizer-generated query plan.

Below, the same query is submitted with the abstract plan that was

captured by show_abstract_plan. As an example, one of the join

methods is changed from merge join to hash join in the abstract plan.

Note: The quotes around the abstract plan text in the following query are required in order to submit an abstract query plan.

select d.Name

from Distribution d,

Extension e

where d.distributionID = e.distributionID

and d.market = (select max(d2.market)

from Distribution d2

where d2.distributionID = d.distributionID)

PLAN '( nested

( h_join

( i_scan Distribution_1CUP

( table ( d Distribution ) ) )

( i_scan Extension_1NUP ( table (e Extension ) ) ) )

( subq

( scalar_agg

( i_scan Distribution_1CUP

( table ( d2 Distribution ) ) ) ) ) )

( prop ( table ( d Distribution ) )

( parallel 1 )( prefetch 16 ) ( lru ) )

( prop ( table ( e Extension ) )

( parallel 1 ) ( prefetch 16 ) ( lru ) )

( prop ( table ( d2 Distribution ) )

( parallel 1 ) ( prefetch 2 ) ( lru ) )'

go

Leaving the set option of show_abstract_plan on when executing the

above query with an abstract plan produced more details. Low-level


costing information, as well as index selection, is part of the

expanded output:

Applying the input Abstract Plan

( nested ( h_join ( i_scan Distribution_idx1 ( table ( d Distribution ) ) )

( i_scan Extension_idx1 ( table ( e Extension ) ) ) ) ( subq (

scalar_agg ( i_scan Distribution_idx1 ( table ( d2 Distribution ) ) ) )

) ) ( prop ( table ( d Distribution ) ) ( parallel 5 ) ( prefetch 2 ) (

lru ) ) ( prop ( table (e Extension ) ) ( parallel 1 ) ( prefetch 16 ) (

lru ) ) ( prop ( table ( d2 Distribution ) ) ( parallel 1 ) ( prefetch 2

) ( lru ) )

Abstract Plan based Eqc priming...

OptBlock0 Eqc{0} -> Pops primed by the Abstract Plan:

( PopIndScan Distribution_idx1 d ) cost: 2443.8 T(L83,P82,C2278)

O(L83,P82,C2278) props: [{6}]

OptBlock0 Eqc{1} -> Pops primed by the Abstract Plan:

.

.

.

... done Abstract Plan Eqc priming.

The Abstract Plan (AP) of the final query execution plan:

( nested ( h_join ( i_scan Distribution_1idx1 ( table ( d Distribution ) )

) ( i_scan Extension_idx1 ( table (e Extension ) ) ) ) ( subq (

scalar_agg ( i_scan Distribution_idx1 table ( d2 Distribution ) ) ) ) )

) ( prop ( table ( d Distribution ) ) ( parallel 1 ) ( prefetch 2 ) (

lru ) ) ( prop ( table ( e Extension ) ) ( parallel 1 ) ( prefetch 16 )

( lru ) ) ( prop ( table ( d2 Distribution ) ) ( parallel 1 ) ( prefetch

2 ) ( lru ) )

To experiment with the optimizer behavior, this AP can be modified and then

passed to the optimizer using the PLAN clause:

SELECT/INSERT/DELETE/UPDATE ... PLAN '( ... )

Why Use Abstract Plans for ASE 15?

So why use abstract plans for ASE 15? ASE 15 permits optimizer

hints, forceplan, and index hints, but these are not sufficient to modify all

tunable aspects of a query’s plan. Abstract plans allow the DBA to

change any aspect of the query plan, such as join types, join orders,

parallelism, cache strategy, and index usage. As an example, the

modification of abstract query plans allows the DBA to influence

query join types at the query level. Without the ability to modify

abstract plans, it would be necessary to set session- or server-level

join types for ASE to consider a specified join type.


Application of Optimization Tools

Sybase ASE 15 supports the use of many optimization techniques and tools. The use of these tools requires an expert level of understanding, since improper application of query optimization tools can degrade performance for ASE as a whole. In this section, the following optimization techniques are discussed from the standpoint of how to recognize problems resulting from the improper application of optimization strategies in ASE 15.

Optimization Goal Performance Analysis

As indicated in Chapter 5, use optimization goals with caution. When

optimization goals are altered by the database administrator, it may

be necessary to periodically benchmark queries where optimization

goals are employed. This benchmarking is necessary since the optimization goals instruct a query to perform optimization based upon known database usage patterns such as OLTP, DSS, or mixed-use systems. Over time, the usage balance of queries can change

from one type to another, or the ratio of OLTP to DSS queries can

change on a system. It is especially important to pay attention to the

usage patterns on systems where optimization goals are set at a broader level, such as at the server level.
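As a sketch, an optimization goal can be set server-wide or, preferably, per session (the goal names shown are standard ASE 15 values):

-- server-wide (affects all queries; use with caution)
sp_configure "optimization goal", 0, "allrows_mix"
go
-- session level (preferred granularity)
set plan optgoal allrows_oltp
go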

To illustrate how optimization goals can affect query plans in an

adverse manner, the following example shows the optimization goal

of allrows_oltp employed on an OLTP system at the server level. As a

first line of problem detection related to the incorrect employment of

optimization goals, the set statistics io and set statistics time diagnostic commands are enabled.

select e.eventID, el.eventTypeCode, e.actualEndTime

from Event e,

EventList el

where e.eventID = el.eventID

and e.eventID = 4

and el.eventTypeCode = 1

and e.actualEndTime = (select min(e2.actualEndTime)

from Event e2

where e.eventID = e2.eventID)

go


Statistics I/O, time with optimization goal of allrows_oltp:

============================================================

Table: EventList scan count 1, logical reads: (regular=6 apf=0 total=6),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 1, logical reads: (regular=95 apf=0 total=95),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 756, logical reads: (regular=24189 apf=0

total=24189), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Execution Time 10.

SQL Server cpu time: 1000 ms. SQL Server elapsed time: 993 ms.

Statistics I/O with no optimization goal set:

============================================================

Table: EventList scan count 1, logical reads: (regular=6 apf=0 total=6),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 1, logical reads: (regular=95 apf=0 total=95),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 1, logical reads: (regular=31 apf=0 total=31),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Execution Time 0.

SQL Server cpu time: 0 ms. SQL Server elapsed time: 53 ms.

In this example, the logical I/O required to satisfy the query is

adversely impacted due to the use of a poorly selected optimization

goal. In terms of query performance degradation, the logical I/Os per-

formed on the second scan of the event table where a poorly set

optimization goal is employed increased to 24,189 in comparison to

the 31 I/Os performed on the event table where the optimization goal

is selected by the optimizer.

The conclusion that can be drawn from this example is to use

common sense when employing optimization goals! The query in this

example is typical of a query performed on a DSS system. The opti-

mization goal is set to allrows_oltp, an optimization strategy that is

not appropriate for DSS queries.

This example illustrates a more important point: Restrict the use

of optimization goals to the most granular level possible, especially

on systems where mixed query types are standard. For the server as a

whole, leave the optimization goals at the default ASE setting, unless

a great majority of queries issued in ASE would benefit from

non-default optimization goals.


Optimization Criteria Performance Analysis

ASE 15 provides database administrators a mechanism to specify

query optimization criteria, such as the ability to suggest join strate-

gies for queries through abstract query plans. While database

administrators and users may properly understand and apply ASE
optimization criteria, in some instances the application may simply
be incorrect. In other cases, the application may be correct for the
current characteristics and volume of data within the database, yet
prove incorrect as data volume and characteristics change over
time. To offer a strategy for the detection of misapplied query optimi-

zation criteria, an example is presented to demonstrate the detection

of optimization criteria issues:

select e.eventID, el.eventTypeCode, e.actualEndTime

from Event e,

EventList el

where e.eventID = el.eventID

and e.eventID = 4

and el.eventTypeCode = 1

and e.actualEndTime = (select min(e2.actualEndTime)

from Event e2

where e.eventID = e2.eventID)

go

To detect performance-related problems with optimization criteria,

the set statistics io and statistics time diagnostics are enabled prior to

query execution. For this example, one query is executed with the

ASE-generated abstract plan in an intact format. In the second execu-

tion of the same query, the optimization criteria are altered with the

selection of a different join strategy. The join strategy is suggested

through the context of the abstract plan. The change in join type is

from a merge join (m_join) to a nested loop join (nl_join).

The modified abstract plan, with change in italics:

plan "( nl_join

( sort

( nested

( i_scan EventList_3NNX

( table ( el EventList ) ) )

( subq

( scalar_agg


( i_scan Event_1CUP

( table ( e2 Event ) ) ) ) ) ) )

( sort

( i_scan Event_7NNX

( table ( le Event ) ) ) ) )

( prop ( table ( el EventList ) )

( parallel 1 ) ( prefetch 2 ) ( lru ) )

( prop ( table ( e2 Event ) )

( parallel 1 ) ( prefetch 16 ) ( lru ) )

( prop ( table ( e Event ) )

( parallel 1 ) ( prefetch 16 ) ( lru ) )"

Output from server-generated optimization criteria:

Table: EventList scan count 1, logical reads: (regular=6 apf=0 total=6),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 1, logical reads: (regular=95 apf=0 total=95),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 1, logical reads: (regular=31 apf=0 total=31),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Execution Time 0.

SQL Server cpu time: 0 ms. SQL Server elapsed time: 56 ms.

Output from user-modified optimization criteria:

Table: EventList scan count 1, logical reads: (regular=6 apf=0 total=6),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 1, logical reads: (regular=95 apf=0 total=95),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: Event scan count 756, logical reads: (regular=24189 apf=0

total=24189), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Execution Time 11.

SQL Server cpu time: 1100 ms. SQL Server elapsed time: 1136 ms.

The performance degradation where optimization criteria of nl_join is

specified at the query level is obvious based on the output of the sta-

tistics io and statistics time diagnostics. The query where the

optimizer selected the join type of m_join performed only 31 I/Os on

the event table vs. the 24,189 I/Os on the same table where the opti-

mization criteria was manipulated through the abstract plan. To solve

this query problem, remove the user-specified optimization criteria,
given the additional I/O required

where the user has modified the query’s plan. This further supports

the problem diagnostic steps outlined at the beginning of this chapter,


which suggest the removal of user-modified join criteria when begin-

ning a query performance exercise.

Optimization Timeout Analysis

The goal of the ASE 15 Query Optimizer is to quickly obtain a work-

able query plan and move on from the optimization stage of query

processing. In some cases, users of ASE may be too restrictive with

their settings of optimization time limitations, causing the optimizer

to quickly select a plan that may not be optimal.

Note that when referring to optimization timeout, a complete

query plan must be known in order for optimization to time out. The

simple reason is that ASE uses the projected execution costs from the

best plan found to predict an execution time. Consequently, the fear

that the server might “timeout” without finding any query plan is

ungrounded.

The ability to time out query optimization is a very useful tool in

DSS/mixed workload environments. In previous releases of ASE, the

only means of controlling optimization of complex queries was

through the set tablecount # command. This optimization strategy

typically resulted in increasing the search space from the default of 4

to 6, 8, or higher depending on query complexity. The issue with this

optimization strategy was that while the increased search space often

could find a better plan, the search space increased by an exponential

factor, often quoted as n! (although this is inaccurate). This exhaus-

tive searching often could require more time than the actual query

execution. For example, consider a query involving a 12-way join,

which involves tables containing tens of millions of rows in each

table. With the default search space, the query may never return. By

setting the search space to 12, the query optimization took 10 minutes

and the query execution only took 30 seconds! Unfortunately, much

of the 10 minutes was spent exhaustively searching the nearly 500

million possible permutations (12!), whereas the optimal plan was

found early in the search space. The ability to limit the optimization

timeout will allow complex queries to have efficient execution with-

out excessive optimization.

While the ability to restrict optimization times is a powerful and

useful tool, the application of such restriction can cause query-level

performance issues when it is misapplied. Consider the showplan

excerpts below that illustrate a misapplication problem. Note the


query plan from the first example has no optimization timeout limit

while the query in the second example does have an optimization

timeout limit.

The query with opttimeoutlimit set to 1 chose a table scan over an

index scan and does not take advantage of partition elimination on

the second table in this example. With the optimization timeout left at

the default value of 10, the optimizer selects a more efficient query

plan since the optimizer had more time to consider additional query

plans.

No opttimeoutlimit:

QUERY PLAN FOR STATEMENT 1 (at line 1).

15 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|SQFILTER Operator has 2 children.

|

| |N-ARY NESTED LOOP JOIN Operator has 5 children.

| |

| | |MERGE JOIN Operator (Join Type: Inner Join)

| | | Using Worktable1 for internal storage.

| | | Key Count: 1

| | | Key Ordering: ASC

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | customers

| | | | c1

| | | | [ Eliminated Partitions : 2 3 4 ]

| | | | Index : customers_idx3

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Keys are:

| | | | code ASC

| | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf

pages.

| | | | Using I/O Size 2 Kbytes for data pages.

| | | | With LRU Buffer Replacement Strategy for data pages.

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | authors

| | | | a1


| | | | [ Eliminated Partitions : 2 3 4 ]

| | | | Index : authors_idx3

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Keys are:

| | | | code ASC

| | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf

pages.

| | | | Using I/O Size 2 Kbytes for data pages.

| | | | With LRU Buffer Replacement Strategy for data pages.

Set opttimeoutlimit to 1:

QUERY PLAN FOR STATEMENT 1 (at line 1).

16 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|SQFILTER Operator has 2 children.

|

| |N-ARY NESTED LOOP JOIN Operator has 5 children.

| |

| | |MERGE JOIN Operator (Join Type: Inner Join)

| | | Using Worktable2 for internal storage.

| | | Key Count: 1

| | | Key Ordering: ASC

| | |

| | | |SCAN Operator

| | | | FROM TABLE

| | | | customers

| | | | c1

| | | | [ Eliminated Partitions : 2 3 4 ]

| | | | Index : customers_idx3

| | | | Forward Scan.

| | | | Positioning by key.

| | | | Keys are:

| | | | code ASC

| | | | Using I/O Size 2 Kbytes for index leaf pages.

| | | | With LRU Buffer Replacement Strategy for index leaf

pages.

| | | | Using I/O Size 2 Kbytes for data pages.

| | | | With LRU Buffer Replacement Strategy for data pages.

| | |

| | | |SORT Operator

| | | | Using Worktable1 for internal storage.


| | | |

| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | authors

| | | | | a1

| | | | | Table Scan.

| | | | | Forward Scan.

| | | | | Positioning at start of table.

| | | | | Using I/O Size 16 Kbytes for data pages.

| | | | | With LRU Buffer Replacement Strategy for data pages.

The difference in I/O counts between the queries further emphasizes

that database administrators should not be too restrictive using opti-

mization timeouts. Note the difference in logical reads on the authors

table. The query with the optimization timeout limit set to 1 required

868,239 logical reads on the authors table vs. the logical reads of

only 1,455 for the query without an optimization timeout limit.

Query with opttimeoutlimit set to 1:

Table: customers scan count 1, logical reads: (regular=19 apf=0 total=19),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: customers scan count 1, logical reads: (regular=19 apf=0 total=19),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: orders scan count 81, logical reads: (regular=195 apf=0 total=195),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: orders scan count 243, logical reads: (regular=591 apf=0 total=591),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: inventory scan count 729, logical reads: (regular=1779 apf=0

total=1779), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: authors scan count 8748, logical reads: (regular=868239 apf=0

total=868239), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: authors scan count 28, logical reads: (regular=2779 apf=0

total=2779), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Query with opttimeoutlimit not set:

Table: customers scan count 1, logical reads: (regular=19 apf=0 total=19),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: customers scan count 4, logical reads: (regular=397 apf=0

total=397), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: orders scan count 21, logical reads: (regular=51 apf=0 total=51),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: orders scan count 63, logical reads: (regular=159 apf=0 total=159),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0


Table: inventory scan count 189, logical reads: (regular=483 apf=0

total=483), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: authors scan count 567, logical reads: (regular=1455 apf=0

total=1455), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Table: authors scan count 28, logical reads: (regular=2779 apf=0

total=2779), physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

For detection of problems with optimization timeouts, the best form

of detection and resolution is prevention, whereby DBAs apply

optimization timeouts for very specific problem queries where the

plans are very complex and optimization is expected to take a large

portion of total query time. By query time it is meant the time to pick

a query plan, satisfy the data retrieval, and return results, and not

“wall clock” time.

Recommendation: If there is a need to set optimization timeouts, set

the limits at the session level for specific queries, and not server wide.
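Following that recommendation, a session-level setting is a one-line sketch; the value is a percentage of estimated query time, and 10 mirrors the default referenced later in this section:

set plan opttimeoutlimit 10
go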

Suggested Approach to Fix Optimization Timeout

Problems

If a situation arises where performance degrades on queries where

optimization timeouts are employed, eliminate the optimization time-

out for the query or provide a more generous timeout limit for the

optimizer. As data changes on a server, it is possible for the optimizer

to need more time to select an optimal query plan. To remedy a

restrictive optimization timeout limit, try to increment the optimiza-

tion timeout limit by 1% at a time, with set noexec on. Verify whether ASE

arrives at a different query plan with each successive run. If different

query plans are obtained with each run, especially where the query

plan difference of successive executions is significant, optimization

timeout limits should be raised until the optimizer begins to arrive at

a similar plan on repeated executions.
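A minimal sketch of one iteration of this approach (the select stands in for the query under study; note that set plan opttimeoutlimit must be issued before set noexec on, since no other commands execute while noexec is on):

set statistics time on
set showplan on
set plan opttimeoutlimit 2 -- raise by 1 on each successive run
go
set noexec on -- compile and display plans without executing
go
select ... -- the query under study
go
set noexec off
go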

The total execution time, represented by the statistics time out-

put, can also offer a clue on where the breaking point is for

optimization timeout limits. For demonstration, the query used to

generate the above plan, with optimization timeout of 1, is executed

iteratively while increasing optimization timeouts by 1 for each run.

Statistics time is enabled for each execution, and the following results

are tabulated:


opttimeoutlimit Setting   CPU Time   Comments
1                         19500 ms   <== Very restrictive
2                         19400 ms
3                         19400 ms
4                         19200 ms
5                           200 ms
6                           100 ms   <== Set timeout here
7                           100 ms
8                           100 ms   <== No more benefit

As the table shows, as the optimization timeout limit was increased,

there was a significant gain in performance when the timeout limit’s

restrictions were eased toward the recommended setting of 6 for this

query. As more time was given to the optimizer beyond the limitation

of 6, the optimizer did not continue to choose a better plan as it did

when the optimization timeouts were increased from 1 to 5. For this

exercise, despite the arrival at an optimization timeout limit of 6, the

exercise was carried forward for two additional iterations to ensure

additional performance gains were not achieved with further

increases to optimization timeout limits.

This exercise could be applied to help the database administrator

decide where to set optimization timeout limits for queries. As dem-

onstrated here, it is possible to find the best optimization timeout

limit for a query through this iterative approach.

Detection, Resolution, and Prevention of Partition-related Performance Issues

As referenced in the chapter introduction, the new semantic partitions

feature requires the database administrator to make the correct parti-

tion type selection in the physical design stage of the database.

Failure to make the correct partition type selection can result in per-

formance degradation for ASE 15. It should also be noted that the

selection of the partition key is important during the implementation

of semantic partitions. Finally, partition maintenance is important

with ASE 15’s semantic partitions. The maintenance of statistics on

the indexes, columns, and tables provides the ASE 15 optimizer with

information important to the optimizer for best plan selection. Table


maintenance tasks, such as reorgs at the partition level, help the

optimizer to utilize indexes in the most efficient manner.

The importance of selecting the correct partition type, the selec-

tion of optimal partition keys, and the maintenance of statistics for

tables in ASE 15 is highlighted by example in the next sections.

Data Skew Due to Incorrect Partition Type or

Poor Partition Key Selection

Partition skew can impact query performance. For example, consider

a table with customer data partitioned by range. If the data is skewed,
a different partition strategy or partition key will need to be
considered. For help on parti-

tion strategy or partition key selection, refer to Chapter 3. In

evaluation of the data represented in Figure 6-1, the data is skewed

by the absence of customer activity for the customers whose code

column maps to the fourth partition of the customers table.

The alter table definition below was used to partition this table,

which is range partitioned with the code column used as the partition

key:

alter table customers partition by range(code)

(p1 values <= (15000),

p2 values <= (30000),

p3 values <= (45000),

p4 values <= (60000))

Figure 6-1: Customers Table — row counts by range partition:
p1 = 155,000 rows; p2 = 165,000 rows; p3 = 115,000 rows;
p4 = 15,000 rows

Obviously, for any query that accesses data mapped to partition p4,

query performance will exceed the performance of a query accessing

partitions p1, p2, or p3. However, since partition p4 represents only a

small percentage of the data, and likely a small percentage of any

data access operations, most queries will perform less optimally. The

partition strategy results in data skew, which should be avoided in

most situations with partitioned data. To remedy any query perfor-

mance issues due to partition imbalance, consider altering the

partition key or modifying the partition type. One option is to

consider a replacement partition key. A possible replacement is the

effectiveDate column of the table. Here, activity would be directed to

a partition based on the date of activity, and not the customer ID. To

remedy query performance issues here, a hash-based partition is

employed to better balance the data across partitions.
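Partition-level row counts, and therefore skew, can be checked before and after such a change with sp_helpartition:

sp_helpartition customers
go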

To change the partition strategy to hash, first the table partitions

must be combined. This is done by altering the partition type to

round-robin with partition degree of 1:

alter table customers partition by roundrobin 1

Then alter the table to hash partition with a partition degree of 4. The

hashkey of the code column is acceptable to achieve a partition bal-

ance due to the uniqueness of the data in this column:

alter table customers partition by hash(code)

(p1,p2,p3,p4)

After the alter table is complete, the partitions are more balanced as

depicted in Figure 6-2:

Figure 6-2: Customers Table (hash) — row counts by partition:
p1 = 112,000 rows; p2 = 115,000 rows; p3 = 110,000 rows;
p4 = 113,000 rows

Effect of Invalid Statistics on a Semantically Partitioned Table

As we have stressed earlier in this chapter, it is very important to

maintain statistics in ASE 15. Here, an example is presented where

statistics are not maintained sufficiently on a table that employs

semantic partitions. To analyze the impact of poorly maintained sta-

tistics, the set statistics plancost and set statistics time diagnostic

commands are employed:

set statistics plancost on

set statistics time on

go

select a1.identification, a1.code, a1.effectiveDate

from brian a1

where a1.code=30001

and a1.effectiveDate = (select max(a2.effectiveDate)

from brian a2

where a1.code = a2.code)

go

Diagnostic output:

==================== Lava Operator Tree ====================

Emit

(VA = 4)

1 rows est: 12

cpu: 1600

/

SQFilter

(VA = 3)

1 rows est: 12

/ \

IndexScan ScalarAgg

brian_idx3 (a1) Max

(VA = 0) (VA = 2)

100001 rows est: 12 1 rows est: 1

lio: 6999 est: 9 cpu: 600

pio: 0 est: 9

/

IndexScan

brian_idx3 (a2)

(VA = 1)


100001 rows est: 12

lio: 7010 est: 9

pio: 1 est: 9

============================================================

Execution Time 16.

SQL Server cpu time: 1600 ms. SQL Server elapsed time: 1596 ms.

For this example, the estimated number of rows scanned with the use
of index brian_idx3 is 12, while the actual number of rows scanned was

100,001. Similarly, the logical I/O (lio) for the scan of brian_idx3

was estimated at 9, while the actual I/O measurement was 6,999.

Since the optimizer relies upon statistics to make assumptions

about data, invalid or old statistics can cause the optimizer to make

poor query plan selections.
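In this example, the remedy was simply to refresh statistics. A sketch of the commands involved follows; ASE 15 also accepts a partition name so that maintenance can be limited to one data partition (the partition name p3 is illustrative):

-- refresh statistics on the table and all of its indexes
update index statistics brian
go

-- or limit the work to a single data partition
update statistics brian partition p3
go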

After the statistics were updated on the tables in the above exam-

ple, observe the estimated rowcounts and I/O estimates are more

in line with the actual counts. Additionally, the optimizer chose a dif-

ferent, and more efficient, plan. Note the change to the table scan

below where the optimizer recognized a greater efficiency to perform

the table scan with improved statistics information. Finally, the total

query execution time decreased by 12.5% in this example.

==================== Lava Operator Tree ====================

Emit

(VA = 4)

1 rows est: 100002

cpu: 1400

/

SQFilter

(VA = 3)

1 rows est: 100002

/ \

TableScan ScalarAgg

brian (a1) Max

(VA = 0) (VA = 2)

100001 rows est: 100002 1 rows est: 1

lio: 4693 est: 11847 cpu: 600

pio: 0 est: 1366

/

IndexScan

brian_idx3 (a2)

(VA = 1)


100001 rows est: 75821

lio: 7010 est: 24743

pio: 0 est: 1594

============================================================

Execution Time 14.

SQL Server cpu time: 1400 ms. SQL Server elapsed time: 1416 ms.

As for the impact to the semantic partitioned tables in this example,

in both queries the optimizer was able to maintain partition aware-

ness, despite the absence of useful statistics for the first execution of

the query. The following is from the showplan output of the first

execution:

| |SCAN Operator

| | FROM TABLE

| | brian

| | a1

| | [ Eliminated Partitions : 1 2 4 ]

| | Index : brian_idx3

| | Forward Scan.

| | Positioning by key.

| | Keys are:

| | code ASC

| | Using I/O Size 16 Kbytes for index leaf pages.

| | With LRU Buffer Replacement Strategy for index leaf pages.

| | Using I/O Size 16 Kbytes for data pages.

| | With LRU Buffer Replacement Strategy for data pages.

The output indicates only one partition, partition 3, was necessary to

satisfy the query.


Summary

The detection and resolution of query-level problems in any rela-

tional database system can often be considered an art and not an

exact science. The steps to detect and resolve problems in this chap-

ter are the steps the authors have utilized to detect and resolve

query-level problems with ASE on version 15 and prior. Many data-

base administrators utilize their own methodology to detect and

resolve problems within ASE. This chapter’s purpose is to help the

database administrator become more aware of the tools available in

the ASE 15 release, and build an awareness of the performance issues

that can be caused with the improper application of some of the new

optimization tools. The chapter also helps to build familiarity through

application of the new ASE diagnostic utilities set options and statis-

tics plancost.


Chapter 7

Computed Columns

Computed columns were introduced in Sybase ASE 15 to provide

easier data manipulation and faster data access. This chapter gives an

overview of computed columns, explains materialization and deter-

ministic properties, and explores their impact on computed columns.

This chapter also discusses the benefits of computed columns.

Some of the ASE system stored procedures and tables were

enhanced to support computed columns. These changes are listed at

the end of the chapter.

The indexes for computed columns are discussed in Chapter 8,

“Functional Indexes.”

Introduction

Computed columns are columns that are defined by an expression.

This expression can be built by combining regular columns in the

same row and may contain functions, arithmetic operators, case

expressions, global variables, Java objects, and path names.

For example:

create table parts_table

(part_no int,

name char(30),

list_price money,

quantity int,

total_cost compute quantity*list_price

)


In the above example, total_cost is a computed column defined by an

arithmetic operation between the columns list_price and quantity.

The datatype of total_cost is automatically inferred from the com-

puted column expression. Each system datatype has a datatype

hierarchy, which is stored in the systypes system table. The datatype

hierarchy determines the results of computations using values of dif-

ferent datatypes. The result value is assigned the datatype that has the

lowest hierarchy. In this example, total_cost is a product of the

money and int datatypes. Int has a hierarchy of 15 and money has a

hierarchy of 11. Therefore, the datatype of the result is money.
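The hierarchy values referenced here can be inspected directly, since they are stored in systypes:

select name, hierarchy
from systypes
order by hierarchy
go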

Key Concepts

Materialization and deterministic characteristics are important con-

cepts to understand. These concepts affect the behavior of the

computed columns.

Materialization

Materialized columns are preevaluated and stored in the table when

base columns are inserted or updated. The values associated with the

computed columns are stored in both the data row and the index row.

Any subsequent access to a materialized column does not require

reevaluation; its preevaluated result is accessed.

Columns that are not materialized are also called virtual col-
umns. If a column is virtual, or not materialized, its result value must be
evaluated each time the column is accessed. This means that if the

virtual computed column expression is based on, or calls, a

nondeterministic expression, it may return different values each time

you access it. You may also encounter run-time exceptions, such as

domain errors, when you access virtual computed columns. The con-

cept of nonmaterialized columns is similar to “views,” where the

definition of the view is evaluated each time the view is called.

A materialized column is reevaluated only when one of its base

columns is updated. A nonmaterialized, or virtual, computed column

becomes a materialized computed column once it is used as an index

key.


A computed column is defined as materialized or nonmateri-

alized during the create table or alter table process using the keyword

materialized or not materialized after the column name. The default is

not materialized.

The following example illustrates the difference between materi-

alized and nonmaterialized columns:

create table rental_materialized

(cust_id int, start_date as getdate() materialized,

last_change_dt datetime)

insert into rental_materialized (cust_id, last_change_dt)

values (1,getdate())

insert into rental_materialized (cust_id, last_change_dt)

values (2, getdate())

select * from rental_materialized

cust_id start_date last_change_dt

-------- ------------------- --------------

1 Mar 16 2005 3:14PM Mar 16 2005 3:14PM

2 Mar 16 2005 3:14PM Mar 16 2005 3:14PM

create table rental_not_materialized

(cust_id int, start_date as getdate(),

last_change_dt datetime)

insert into rental_not_materialized (cust_id, last_change_dt)

values (1,getdate())

insert into rental_not_materialized (cust_id, last_change_dt)

values (2, getdate())

select * from rental_not_materialized

cust_id start_date last_change_dt

-------- ------------------- --------------

1 Mar 30 2005 4:00PM Mar 16 2005 3:14PM

2 Mar 30 2005 4:00PM Mar 16 2005 3:14PM

In the above set of examples, the rental_materialized table has a

materialized computed column start_date. In the rental_not_material-

ized table, this column is nonmaterialized. Note the difference in the

start_date column between the two result sets. In the first example,


the start_date value is set when the row is inserted. In the

nonmaterialized example, the start_date value is computed when the

query executes.

Tip: As shown in the above example, there could be a huge difference

in the actual values of the result set between using a materialized column

and a nonmaterialized column. In order to increase the readability and

decrease confusion, always specify the keyword not materialized for

nonmaterialized columns even though it is the default.

Deterministic Property

A deterministic algorithm will always produce the same output for a

given set of inputs. Expressions and functions using deterministic

algorithms exhibit a deterministic property. On the other hand,

nondeterministic expressions may return different results each time

they are evaluated, even when they are called with the same set of

input values.

A good example of a nondeterministic function is the getdate()

function. It always returns the current date, which is different each

time the function is executed. Any expression built on a nondeter-

ministic function will also be nondeterministic. For example, age

(getdate() minus date of birth) will also be nondeterministic. Also, if

a function’s return value depends on factors other than input values,

the function is probably nondeterministic. A nondeterministic func-

tion need not always return a different value for the same set of

inputs. It just cannot guarantee the same result each time.

The following example illustrates the deterministic property:

create table Employee

(emp_id int,

emp_name varchar(30),

formatted_name compute upper(emp_name),

date_of_birth datetime,

emp_age compute datediff(year,

date_of_birth, getdate()))

insert into Employee

(emp_id, emp_name, date_of_birth)

values (1, "nareSh AdUrTy", "01/01/1970")

select * from Employee


emp_id emp_name formatted_name

date_of_birth emp_age

----------- ---------------------- -----------------------------

-------------------------- -----------

1 nareSh AdUrTy NARESH ADURTY

Jan 1 1970 12:00AM 35

In the above example, the column formatted_name has a determinis-

tic property, whereas the emp_age column has a nondeterministic

property.

Relationship between Deterministic Property

and Materialization

It is important to understand that a computed column can be defined

as materialized or nonmaterialized independently of the deterministic

property. The following matrix illustrates the possible combination of

properties for a computed column:

Deterministic Nondeterministic

Materialized Deterministic and

materialized

Nondeterministic and

materialized

Nonmaterialized Deterministic and

nonmaterialized

Nondeterministic and

nonmaterialized

Deterministic and Materialized Computed Columns

Deterministic materialized computed columns always have the same
values, however often they are reevaluated.

Deterministic and Nonmaterialized Computed Columns

Nonmaterialized columns can be either deterministic or

nondeterministic. A deterministic nonmaterialized column always

produces repeatable results, even though the column is evaluated

each time it is referenced. For example:

select emp_id, date_of_birth from Employee

where formatted_name = "NARESH ADURTY"

This statement always returns the same result if the data in the table

does not change.


Nondeterministic and Materialized Computed Columns

Nondeterministic and materialized computed columns result in

repeatable data. They are not reevaluated when referenced in a query.

Instead, Adaptive Server uses the preevaluated values. For example:

create table rental_materialized

(cust_id int,

start_date as getdate() materialized,

last_change_dt datetime)

When a row is selected from the above table, the value of start_date

will be the datetime when the row was inserted and not the datetime

of the select statement.

Nondeterministic and Nonmaterialized Computed Columns

Nonmaterialized columns that are nondeterministic do not guarantee

repeatable results. For example:

create table Renting

(Cust_ID int,

Cust_Name varchar(30),

Formatted_Name compute format_name(Cust_Name),

Property_ID int,

Property_Name compute get_pname(Property_ID),

start_date compute today_date() materialized,
Rent_due compute

rent_calculator(Property_ID, Cust_ID, start_date))

Rent_due is a virtual nondeterministic computed column. It calcu-

lates the current rent due based on the rent rate of the property, the

discount status of the customer, and the number of rental days.

select Cust_Name, Rent_due from Renting

where Cust_Name= 'NARESH ADURTY'

In this query, the column Rent_due returns different results on differ-

ent days. This column has a serial time property, whose value is a

function of the amount of time that passes between rent payments.

The nondeterministic property is useful here, but you must use it with

caution. For instance, if you accidentally defined start_date as a vir-

tual computed column and entered the same query, you would rent all

your properties for nothing: start_date would always be evaluated to

the current date, so the number of rental days is always 0. Likewise,

if you mistakenly define the nondeterministic computed column

Rent_due as a preevaluated column, either by declaring it


materialized or by using it as an index key, you would also rent your

properties for nothing. It is evaluated only once, when the record is

inserted, and the number of rental days is 0.

Tip: Before creating a computed column, understand whether or not the

computed column has a deterministic property and the implications of

defining this column as materialized or nonmaterialized. A mistake in the

definition of a computed column can prove very costly as shown in the

above example.

Benefits of Using Computed Columns

Provide Shorthand and Indexing for an

Expression

Computed columns allow you to create a shorthand term for an

expression. For example, “Age” can be used for “getdate –

DateOfBirth.” The computed columns can be indexed as long as the

resulting datatype can be in an index. The datatypes that cannot be

indexed include text, image, Java class, and bit.

For more details on indexing, refer to Chapter 8, “Functional

Indexes.”

Composing and Decomposing Datatypes

Computed columns can be used to compose and decompose complex

datatypes. You can use computed columns either to make a complex

datatype from simpler elements (compose), or to extract one or more

elements from a complex datatype (decompose). Complex datatypes

are usually composed of individual elements or fragments. You can

define automatic decomposition or composition of complex datatypes

when you define the table. For example, suppose you want to store

XML “order” documents in a table, along with some relational ele-

ments: order_no, part_no, and customer. Using create table, you can

define an extraction with computed columns:

create table order(xml_doc image,

order_no compute xml_extract("order_no", xml_doc) materialized,


part_no compute xml_extract("part_no", xml_doc) materialized,
customer compute xml_extract("customer", xml_doc) materialized)

Each time you insert a new XML document into the table, the docu-

ment’s relational elements are automatically extracted into the

materialized columns. Or, to present the relational data in each row as

an XML document, specify mapping the relational data to an XML

document using a computed column in the table definition. For

example, define a table:

create table order

(order_no int,

part_no int,

quantity smallint,

customer varchar(50))

Later, to return an XML representation of the relational data in each

row, add a computed column using alter table:

alter table order

add order_xml compute order_xml(order_no, part_no, quantity,

customer)

Then use a select statement to return each row in XML format:

select order_xml from order

User-defined Sort Order

Computed columns can be used to transform data into different for-

mats — to customize data presentations for data retrieval. This is

called user-defined sort order. For example, the following query

returns results in the order of the server’s default character set and

sort order, usually ASCII alphabetical order:

select name, part_no, listPrice from parts_table order by name

You can use computed columns to present your query result in a

case-insensitive format, or you can use system sort orders other than

the default sort order. To transform data into a different format, use

either the built-in function sortkey or a user-defined sort order func-

tion. For example, you can add a computed column called

name_in_myorder with a user-defined function Xform_to_myorder():

alter table parts_table add name_in_myorder compute

Xform_to_myorder(name) materialized


The following query then returns the result in the customized format:

select name, part_no, listPrice from parts_table order by

name_in_myorder

This approach allows you to materialize the transformed and ordered

data and create indexes on it. You can do the same thing using data

manipulation language (DML), specifying the user-defined function

in the select statement:

select name, part_no, listPrice from parts_table

order by Xform_to_myorder(name)

However, using the computed column approach allows you to mate-

rialize the transformed and ordered data. Because materialized

columns can be indexed, the query will have improved performance.

Tip: The ability to index computed columns comes in handy in a

decision support system (DSS) where applications frequently use

expressions and functions in queries and special user-defined ordering

is often required.

Rules and Properties of Computed Columns

• The datatype of a computed column is automatically inferred from its computed_column_expression.
• You can define triggers only on materialized computed columns; they are not allowed on virtual computed columns.
• Computed columns cannot have default constraints.
• computed_column_expression can only reference columns in the same table.
• You cannot use a virtual computed column in any constraints.
• You can use a materialized computed column as a key column in an index or as part of a unique or primary constraint. However, this can only be done if the computed column value is a deterministic expression and if the resultant datatype is allowed in index columns.
• You can constrain nullability only for materialized computed columns. If you do not specify nullability, all computed columns are nullable by default; virtual computed columns are always nullable.
• If a user-defined function in a computed column definition is dropped or becomes invalid, any operations that call that function fail.
• You cannot change a regular column into a computed column, or a computed column into a regular column.
• You cannot drop or modify the base column referenced by a computed column.
• When you add a new computed column without specifying nullability, the default option is nullable.
• When adding a new materialized computed column, the computed_column_expression is evaluated for each existing row in the table, and the result is stored in the table.
• You can modify the entire definition of an existing computed column. This is a quick way to drop the computed column and add a new one with the same name. When doing this, keep in mind that a modified column behaves like a new computed column. The defaults are nonmaterialized and nullable if these options are not specified.
• You can add new computed columns and add or modify their base columns at the same time.
• When you change a not-null, materialized computed column into a virtual column, you must specify null in the modify clause.
• You cannot change a materialized computed column into a virtual column if it has been used as an index key; you must first drop the index.
• When you modify a nonmaterialized computed column to materialized, the computed_column_expression is evaluated for each existing row in the table. The result is stored in the table.
• If you modify computed columns that are index keys, the index is rebuilt.
• You cannot drop a computed column if it is used as an index key.
• You can modify the materialization property of an existing computed column without changing other properties, such as the expression that defines it (a sketch of this follows the list).
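As a sketch of that last point, the virtual total_cost column from the parts_table example earlier in the chapter could be materialized in place, without changing its defining expression:

alter table parts_table
modify total_cost materialized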


Sybase Enhancements to Support Computed Columns

Create Table Syntax Change

create table [database.[owner].] table_name

(column_name {datatype
| {compute | as} computed_column_expression
[materialized | not materialized]} ...)

• {compute | as} — Reserved keywords that can be used interchangeably to indicate that the column is a computed column.
• computed_column_expression — The expression or function that defines a computed column. It can be a regular column name, constant, function, global variable, or any combination, connected by one or more operators. This expression cannot contain local variables, aggregate functions, or other computed columns. This expression cannot be a subquery, and it will be verified for correctness. Columns and functions referenced must exist, and parameters must match the function signature.
• materialized | not materialized — Reserved keywords that specify whether the computed column is materialized or physically stored in the table. If neither keyword is specified, a computed column, by default, is not materialized (not physically stored in the table).

Alter Table Syntax Change

alter table table_name
add column_name {datatype
| {compute | as} computed_column_expression
[materialized | not materialized]} ...
| modify column_name {datatype [null | not null]
[materialized | not materialized] [null | not null]
| {compute | as} computed_column_expression
[materialized | not materialized]} ...

• {compute | as} — Addition to add column, allowing you to add a new computed column.
• materialized | not materialized — Reserved keywords in the modify clause that specify whether the computed column is materialized. By default, a computed column is not materialized. You can also use this option to change the definitions of existing virtual computed columns, making them materialized.

The following examples illustrate the use of alter table.

Add a materialized computed column to the Events table:

alter table Events

add month_number compute datepart(mm,actualStartTime) materialized

Create a functional index (see Chapter 8) on the month_number

column:

create index comp_idx on Events(month_number)

System Table Changes

The following are changes to the system table.

• syscolumns — Contains one row for each computed column and function-based index key associated with a table. A new field, computedcol, has been added to store the object ID of the computed column definition. (A query that uses the status bits below follows this list.)
A new status field, status3, has been added, containing a new internal bit:
  • Hex: 0x0001, Decimal 1 — A hidden computed column for a function-based index key.
There are three new internal status bits in the status2 field:
  • Hex: 0x00000010, Decimal 16 — The column is a computed column.
  • Hex: 0x00000020, Decimal 32 — The column is a materialized computed column.
  • Hex: 0x00000040, Decimal 64 — The column is a computed column in a view.
• syscomments — Stores the text of the computed column or function-based index key expressions.
• sysconstraints — Contains one row for each computed column or function-based index associated with a table. One new internal status bit has been added to the status field:
  • Hex: 0x0100, Decimal 256 — Indicates a computed column object.
• sysindexes — Contains one row for each function-based index or index created on a computed column. One new internal status bit has been added to the status2 field:
  • Hex: 0x8000, Decimal 32768 — Indicates that the index is a function-based index.
• sysobjects — Contains one row for each computed column and function-based index key object.
  • Type field — A new type "C" has been added in the type field when the object is a computed column.
  • Status2 field — A new bit, Hex: 0x40000000, has been added to indicate that the table contains one or more function-based indexes.
• sysprocedures — Stores a sequence tree for each computed column or function-based index definition in binary form.
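As a quick illustration of the syscolumns changes above, the following query lists the computed columns of a table by testing the decimal 16 bit in status2 (a sketch based on the bit values described in this section):

select name
from syscolumns
where id = object_id("Employee")
and status2 & 16 = 16
go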

Stored Procedure Changes

The following are changes to stored procedures.

• sp_checksource — Checks the existence of computed columns source text.
• sp_help — Reports information on computed columns and function-based indexes.
• sp_helpindex — Reports information on computed column indexes and function-based indexes.
• sp_helptext — Displays the source text of computed columns or function-based index definitions.
• sp_hidetext — Hides the text of computed columns and function-based index keys.
• sp_helpcomputedcolumn — A new stored procedure that reports information on all the computed columns in a specified table.


Following is the section of sp_help output that shows the changes

related to computed columns. The output has been edited to show

only the relevant changes.

1> sp_help Employee

2> go

Name Owner Object_type Create_date

-------- ----- ----------- --------------------------

Employee dbo user table Mar 16 2005 4:19PM

(1 row affected)

Column_name Type Length Prec Scale Nulls Default_name Rule_name

Access_Rule_name Computed_Column_object Identity

-------------- -------- ----------- ---- ----- ----- ------------ ---------

formatted_name varchar 30 NULL NULL 1 NULL NULL

NULL Employee_format_832002964 (virtual) 0

date_of_birth datetime 8 NULL NULL 0 NULL NULL

NULL NULL 0

emp_age int 4 NULL NULL 1 NULL NULL

NULL Employee_emp_ag_848003021 (virtual) 0

Object has the following computed columns

Column_Name Property

-------------- --------

formatted_name virtual

Text

-------------------

AS upper(emp_name)

Column_Name Property

----------- --------

emp_age virtual

Text

--------------------------------------------

AS datediff(year, date_of_birth, getdate())

Following is the output of sp_helpcomputedcolumn:

1> sp_helpcomputedcolumn Employee

2> go

Object has the following computed columns


Column_Name Property

-------------- --------

formatted_name virtual

Text

-------------------

AS upper(emp_name)

Column_Name Property

----------- --------

emp_age virtual

Text

--------------------------------------------

AS datediff(year, date_of_birth, getdate())

Summary

Computed columns make it easier and faster to access and manipu-

late data, and they help improve performance by providing the ability

to index an expression. They can be used to add clarity to the applica-

tion code because of their ability to compose and decompose

complex datatypes and to transform data into different formats.

Materialization and deterministic properties are important con-

cepts for computed columns. A materialized column is evaluated and

stored in the table when the base column is inserted or modified,

whereas the nonmaterialized column is evaluated at the time of data

retrieval.

A function has a deterministic property if it produces the same

output every time for a given set of inputs; otherwise, it is a

nondeterministic function. The getdate() function is an example of a

nondeterministic function.

The computed column feature of ASE 15 not only helps reduce

the complexity of the system but also increases performance. When

this feature is used in conjunction with XML in the database, it opens

up new application designs.


Chapter 8

Functional Indexes

This chapter focuses on computed column indexes and function-

based indexes. These features are an extension to the existing create

index command. Examples of code are provided to show how the

functionality of each feature can be best utilized. Particular attention

is paid to the following areas:

• Limitations of the current implementation of each feature in ASE 15
• Positive and negative impacts to tempdb for each feature
• Updating optimizer statistics on the indexes

Computed column indexes and function-based indexes were imple-

mented to ease application development by increasing the flexibility

with which join criteria is defined and optimized within application

code. These features are important for decision support systems

(DSS) since indexes created using these features can aid in

cost-based optimization by providing the optimizer with resolved and

predetermined join predicates. Join predicates are expressions on

either side of the operators in a where clause.

Example:

where TableA.ColumnA = TableB.ColumnA

TableA.ColumnA <== join predicate
TableB.ColumnA <== join predicate

where TableA.ColumnA = TableB.ColumnB * 4

TableA.ColumnA <== join predicate
TableB.ColumnB * 4 <== join predicate


Both examples contain join predicates on either side of the "=" oper-
ator. In the case of a computed column index, the join predicate

TableB.ColumnB * 4 would be replaced by its computed column

(i.e., TableB.ComputedColumnB is the same as TableB.ColumnB *

4), and this computed column would also have an index defined

using the computed column.

Because these types of indexes contain predetermined join predi-

cates, the optimizer does not have to include the costs associated with

resolving the join during execution.

Materialized and nonmaterialized data have to be of concern

when considering the use of either type of index. As will be noted

later in the chapter, the materialization state of the data that is being

considered for the index may determine the type of index and the

behavior of the index. The reader should pay close attention to the

materialization requirements of each type of index when deciding to

use either computed column indexes or function-based indexes.

Computed Column Index

Purpose

A computed column index is an index created on a computed col-

umn. This type of index is useful for derived data columns that are

frequently used in where conditions. In cases where the value of the

search argument has to be derived before the query can be resolved,

this type of index uses preresolved data for the search argument.

Unlike computed columns where the data can be evaluated at access

time, the index on a computed column is evaluated when the row is

inserted into the table. This allows the optimizer to utilize an index

instead of creating a temporary table to resolve the query. The end

result is faster data access.

The syntax for creating a computed column index is the same as

any other index created using an earlier version of Sybase ASE.

What is different is the use of a computed column to be listed as a

column-name on which the index can be built.


Syntax:

create [unique] [clustered | nonclustered]

index index_name

on [[database.]owner.]table_name

(column_name [asc | desc]

[, column_name [asc | desc]]...}

(column_expression [asc | desc]

[, column_expression [asc | desc]]...)

with { fillfactor = percent

, max_rows_per_page = num_rows

, reservepagegap = num_pages

, consumers = x

, ignore_dup_key

, sorted_data

, [ignore_dup_row | allow_dup_row]

, statistics using num_steps values }]

[on segment_name]

[local index [partition_name [on segment_name]

[,[partition_name [on segment_name]...]

The following example creates a clustered index on a computed

column:

create table Rental

(cust_id int,

RentalDate as getdate() materialized,

PropertyID int,

DaysOfRental compute datediff(dd,RentalDate,getdate()) materialized)

create clustered index rental_cc_idx

on dbo.rental

(DaysOfRental)

with allow_dup_row

Below are the showplan results from using a computed column

index. In Exhibit 1, the table referenced in the output was created

without an index on the computed column. A table scan was used to

resolve the query. Since a table scan was used, the number of logical

I/Os necessary to return 2,400 rows from this table, which contains

20,000 rows, was 90 reads. Exhibit 2 shows the create statement that

was used to define the computed column index. Exhibit 3 is the

resulting showplan that indicates the index was selected to resolve

the query. The number of I/Os in this showplan was 76. The differ-

ence in the number of reads is the result of scanning all of the index

pages and returning only the necessary data pages vs. scanning all of


the data pages. set showplan on and set statistics io on were used for

these examples.

Exhibit 1:

=======================================================

select count(*) from Rental where datediff(dd, RentalDate, getdate()) > 15

<== Without Index

=======================================================

QUERY PLAN FOR STATEMENT 1 (at line 1).

3 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|SCALAR AGGREGATE Operator

| Evaluate Ungrouped COUNT AGGREGATE

|

| |RESTRICT Operator

| |

| | |SCAN Operator

| | | FROM TABLE

| | | Rental

| | | Table Scan.

| | | Forward Scan.

| | | Positioning at start of table.

| | | Using I/O Size 16 Kbytes for data pages.

| | | With LRU Buffer Replacement Strategy for data pages.

-----------

2400

Table: Rental scan count 1, logical reads: (regular=90 apf=0 total=90),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

(1 row affected)

Exhibit 2:

create clustered index rental_idx

on dbo.Rental (DaysOfRental)

with allow_dup_row


Exhibit 3:

==================================================================

select count(*) from Rental where DaysOfRental > 15 <== With Index

==================================================================

QUERY PLAN FOR STATEMENT 1 (at line 1).

STEP 1

The type of query is SET STATISTICS ON.

Total writes for this command: 0

QUERY PLAN FOR STATEMENT 1 (at line 1).

2 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|SCALAR AGGREGATE Operator

| Evaluate Ungrouped COUNT AGGREGATE

|

| |SCAN Operator

| | FROM TABLE

| | Rental

| | Using Clustered Index.

| | Index : rental_idx

| | Forward Scan.

| | Positioning by key.

| | Keys are:

| | DaysOfRental ASC

| | Using I/O Size 16 Kbytes for data pages.

| | With LRU Buffer Replacement Strategy for data pages.

-----------

2400

Table: Rental scan count 1, logical reads: (regular=76 apf=0 total=76),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

(1 row affected)


Rules and Properties of a Computed Column

Index

� The datatype of the computed column has to be a datatype that

can be indexed. Columns with a datatype of bit, text, or image

are not permitted in index creation.

� The computed column to be used does not need to be

materialized.

Warning! An index created on a column that was not declared material-

ized will be created using preevaluated data at the time the index is

created. See the section called “Feature Limitations” for more

information.

� A computed column used as an index key does not have to be

deterministic.

� The index can be composed of a combination of computed and

noncomputed columns.

� Stored procedures that use the table on which the index is created

will be recompiled. This is true for stored procedures that are

already in procedure cache.

What is the impact of a computed column that is also defined as a

“primary key” or a “unique constraint” as it relates to overhead, table

maintenance, and query optimization? Since the primary key and

unique constraint are constraints on columns that exist in a table, they

can only be defined on materialized data. Therefore, the only over-

head that occurs when utilizing either of these constraints is a result

of the materialization of the data during either an insert or update

command.

Because computed column indexes contain materialized data, a

computed column index can also be defined as clustered.
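As a sketch of this point (the table, column, and constraint names are illustrative), a unique constraint can be declared on a computed column precisely because the column is materialized:

create table RentalContract
(cust_id int,
PropertyID int,
-- Materialized, so the value physically exists in each row and can
-- carry a constraint:
ContractKey compute convert(varchar(10), cust_id) + "-" +
convert(varchar(10), PropertyID) materialized,
constraint ContractKey_uq unique (ContractKey))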

Feature Benefits

Computed column indexes are most beneficial in cases where the

value of a search argument must be resolved prior to the query results

being returned to the request. By resolving the value prior to query

optimization, the optimizer has the ability to evaluate query access

paths and costing options that utilize indexes. If the computed


column index did not exist, the only option available to the optimizer

is to create a worktable where the search criteria must be resolved

prior to the full resolution of the where clause.

Computed column indexes offer the database administrator a

method for indexing derived data that was previously unavailable. In

cases where the derived data would normally be in the selection crite-

ria, the optimizer would be able to determine that an index exists.

Ultimately, this would reduce the amount of I/O, disk, and memory

required to resolve the query. Since the selection can be executed

using the index, physical I/O necessary to read the data page into

memory would be replaced by physical I/O for index pages contain-

ing more information per page. In the case where the index was also

a covering index (an index where all of the columns specified in the

select and where clauses for a particular table are found in the index

definition), the data page would not be read into memory. As data is

read into memory and interrogated to determine if it meets the selec-

tion criteria, accepted data is stored in worktables. Since the data for

a computed column is on the index page, no worktable would be

required. With the reduction of physical and logical I/O, the direct

result of using a computed column index is a decrease in response

time, which is seen as an improvement in performance.

Example:

create table Rental

(cust_id int,

RentalDate as getdate() materialized,

PropertyID int,

DaysOfRental compute datediff(dd,RentalDate,getdate()) materialized)

create index CoveringIndex

on Rental (cust_id, DaysOfRental)

select cust_id

from Rental

where cust_id > 100

and DaysOfRental > 30

The type of query is SELECT.

ROOT:EMIT Operator

|SCAN Operator

| FROM TABLE


| Rental

| Index : CoveringIndex

| Forward Scan.

| Positioning by key.

| Index contains all needed columns. Base table will not be read.

| Keys are:

| cust_id ASC

| DaysOfRental ASC

| Using I/O Size 16 Kbytes for index leaf pages.

| With LRU Buffer Replacement Strategy for index leaf pages.

Table: Rental scan count 1, logical reads: (regular=12 apf=0 total=12),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

cust_id

-----------

(0 rows affected)

As can be seen in the example, the covering index CoveringIndex is

chosen since the columns in the select and where clauses are con-

tained within the index.

Feature Limitations

For a computed column index, there is a restriction that the data of the computed column must be materialized at the time the index row is created. This is true even if the computed column was not defined as materialized. This might suggest that whenever the value of the indexed computed column changes, the index changes with it; however, that is not the case. The index row is created at the time the data is materialized. When an update occurs to the computed column, the materialized data in the index does not change.
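To make this behavior concrete, consider a minimal sketch (the Orders table and its columns are illustrative, not from the text):

create table Orders
(Qty int,
Price int,
LineTotal as Qty * Price)  -- not declared materialized

-- Legal: the index rows store LineTotal values that are evaluated
-- and materialized as each index row is built.
create index orders_cc_idx on Orders (LineTotal)

-- Selecting LineTotal reevaluates the expression at access time,
-- but the values already stored in the index rows do not change:
update Orders set Price = Price + 10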

Impacts to tempdb

Computed column indexes have no additional impact on the use of

tempdb during the index’s normal use. However, utilization of a

computed column index can decrease the tempdb space used for join

resolution. Computed column indexes would reduce the number and

size of temporary worktables.


Impact to Existing Application Code

In order to take advantage of computed column indexes, the existing

application code will need to be examined (either manually or via an

automated tool) to determine which components are candidates for

utilizing the new indexes. Testing code before and after the changes

will ensure that the expected results make it worthwhile to modify the

code. In some cases, the use of computed column indexes may not

have a significant positive impact. These cases would include:

� Tables that are always scanned because of a small number of

records

� Joins where the size of the computed column is significantly larger than the components that comprise the computed column. Exhibit 4 is an example of a computed column whose resulting size is greater than the combined size of its component parts. In the example, getdate() returns a datetime (8 bytes), which is converted to a char(30) for concatenation with the cust_id and PropertyID columns (each an int of 4 bytes, converted to char(8)). The resulting ContractID column has an internal size of 46 bytes (8 + 8 + 30), whereas the original fields total only 16 bytes.

� Joins where the index needs to utilize data that cannot be

materialized

Exhibit 4:

create table Rental

(cust_id int,

RentalDate as getdate() materialized,

PropertyID int,

DaysOfRental compute datediff(dd,RentalDate,getdate()) materialized,

ContractID compute convert(char(8),cust_id)+convert(char(8),

PropertyID)+convert(char(30), getdate()) materialized

)


Determining When to Use a Computed Column

Index

When determining possible candidate code that would benefit from a

computed column index, the primary concern will be the use of phys-

ical I/O. Joins tend to utilize a large amount of I/O in the form of

temporary worktables and table scans. The worktables are used to

quantify the rows where calculations have to be performed on the

search argument before the join can be resolved. Candidate code

would consist of the following criteria:

� A search argument requires a calculation to be performed

and one or more of the following:

� Joins where a large table is involved

� Search arguments where an index could cover the join criteria

� Joins where several temporary worktables are involved

Once the candidate code has been identified and the computed column index created, the where criteria must be modified to include the computed column in order for the index to be utilized. If the computed column is not specifically referenced, the index will not be used merely because an index exists on the underlying expression. To have the optimizer match an expression itself, see the following section called “Function-based Index.”

Example:

create table Rental

(cust_id int,

RentalDate as getdate() materialized,

PropertyID int,

DaysOfRental compute datediff(dd,RentalDate,getdate()) materialized)

create clustered index rental_idx

on dbo.Rental (DaysOfRental)

with allow_dup_row

select count(*) from Rental where datediff(dd,RentalDate,getdate()) > 15

select count(*) from Rental where DaysOfRental > 15

The first select statement is not aware that an index exists on the cri-

teria specified in the where clause. The second statement specifies

the computed column on which the index had been built. In the


second statement, the optimizer would be able to choose the index to

resolve the search argument.

Optimizer Statistics

As with any other columns that comprise an index, computed columns used to create an index need to have update statistics performed on them on a regular basis. However, since a computed column index is based on materialized data, update statistics needs to be executed when certain criteria are met: 1) a large number of rows affecting the computed column have changed, or 2) a large number of inserts have occurred since the previous update statistics. It is up to the database administrator to know when update statistics needs to be executed.
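For example, standard update statistics syntax can name the computed column used by the index (the Rental table and DaysOfRental column come from the earlier examples):

update statistics Rental (DaysOfRental)
go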

Function-based Index

Purpose

The function-based index has been an industry feature for several

years in other vendors’ products. The concept of a function-based

index was first introduced in 1994 as a performance optimization for

object-oriented databases. The concept was based on the utilization

of a function in order to improve join performance. In ASE 15, the

feature is implemented to compete with other products on the market

that currently have this feature.

A function-based index should be used where preresolved search

criteria can benefit a slow or long-running query. The function-based

index is an index based on a function of the search criteria. The

optimizer would be able to utilize an index instead of resolving the

search criteria at execution time.

Syntax:

create [unique] [clustered | nonclustered]
index index_name
on [[database.]owner.]table_name
(column_name [asc | desc]
[, column_name [asc | desc]]...
| column_expression [asc | desc]
[, column_expression [asc | desc]]...)
[with { fillfactor = percent
, max_rows_per_page = num_rows
, reservepagegap = num_pages
, consumers = x
, ignore_dup_key
, sorted_data
, [ignore_dup_row | allow_dup_row]
, statistics using num_steps values }]
[on segment_name]
[local index [partition_name [on segment_name]
[, partition_name [on segment_name]]...]]

The following example creates a function-based index:

create index rental_fbi_idx

on dbo.Rental

(PropertyID*RentalRate/3)

Rules and Properties of a Function-based Index

� The resulting datatype of the function-based index has to be an

index-supported datatype. Datatypes of bit, text, image, or Java

class are not allowed in the index creation.

� A computed column to be used to create a function-based index

has to be materialized. Function-based indexes cannot be created

on nonmaterialized data.

� A resulting index key must be deterministic. See the discussion

on the deterministic property in the following section called

“Feature Limitations” for more details.

� The column expression cannot contain a subquery, local variable,

aggregate function, or another computed column. It must contain

at least one base column.

� The index can be composed of a combination of computed and

noncomputed columns. It can have multiple computed column

index keys.

� Stored procedures that use the table on which the index is created

will be recompiled. This is true for stored procedures that are

already in the procedure cache. This behavior is new to ASE 15.

� An index will become invalid if a user-based function upon

which it was based is dropped or becomes invalid.


� The index will only be recognized as long as the expression for

which it was created remains the same as the index. If the expres-

sion changes, the optimizer will no longer be able to correlate the

expression with the index.

� The create index option with sorted_data cannot be used when

creating a function-based index.

� The value of the function-based keys will automatically be

updated by ASE when an insert, delete, or update operation is

performed against the base columns. This is a new behavior of

ASE 15.

Feature Benefits

Function-based indexes, like computed column indexes, are

preresolved data. Given the following search criteria:

where customerID > 100

and DaysOfRental > 90

a function-based index can be created that can be utilized by the

optimizer as if the search criteria were based simply upon another

column in the table.

The following example creates a function-based index:

create table Rental

(cust_id int

, RentalDate datetime

, PropertyID int

, RentalRate int

, DaysOfRental compute datediff(dd,RentalDate,getdate()) materialized

, StartOfRental as getdate() materialized

)

create index rental_fbi_idx

on dbo.Rental

(PropertyID*RentalRate/3)

Following are the showplan results from using a function-based

index. In Exhibit 5, no index had been created on the function. There-

fore, a table scan was used to resolve the query. Since a table scan

was used, the number of logical I/Os necessary to return 2,800 rows

from this table, which contains 20,000 rows, was 63 reads. Exhibit 6

shows the create statement that was used to define the function-based


index. Exhibit 7 is the resulting showplan that indicates the index was

selected to resolve the query. The number of I/Os in this showplan

was 25. The difference in the number of reads is the result of scan-

ning only the index pages vs. scanning all of the data pages. It should

be noted that this index was considered a covering index because the

columns defined in the function, and the subsequent index, were the

only columns specified in the SQL. Since the index was considered a

covering index, the resulting data was read from the index pages

instead of index and data pages. set showplan on and set statistics io

on were used for these examples.

Exhibit 5:

===========================================================================

select count(*) from Rental where PropertyID*RentalRate/3 > 180 <== Without

Index

===========================================================================

QUERY PLAN FOR STATEMENT 1 (at line 1).

3 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|SCALAR AGGREGATE Operator

| Evaluate Ungrouped COUNT AGGREGATE

|

| |RESTRICT Operator

| |

| | |SCAN Operator

| | | FROM TABLE

| | | Rental

| | | Table Scan.

| | | Forward Scan.

| | | Positioning at start of table.

| | | Using I/O Size 16 Kbytes for data pages.

| | | With LRU Buffer Replacement Strategy for data pages.

-----------

2800


Table: Rental scan count 1, logical reads: (regular=63 apf=0 total=63),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

Exhibit 6:

====================================================================

create index rental_fbi_idx on dbo.Rental (PropertyID*RentalRate/3)

====================================================================

Exhibit 7:

========================================================================

select count(*) from Rental where PropertyID*RentalRate/3 > 180 <== With

Index

========================================================================

QUERY PLAN FOR STATEMENT 1 (at line 1).

2 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|SCALAR AGGREGATE Operator

| Evaluate Ungrouped COUNT AGGREGATE

|

| |SCAN Operator

| | FROM TABLE

| | Rental

| | Index : rental_fbi_idx

| | Forward Scan.

| | Positioning by key.

| | Index contains all needed columns. Base table will not be read.

| | Keys are:

| | sybfi2_1 ASC

| | Using I/O Size 16 Kbytes for index leaf pages.

| | With LRU Buffer Replacement Strategy for index leaf pages.

-----------

2800


Table: Rental scan count 1, logical reads: (regular=25 apf=0 total=25),

physical reads: (regular=0 apf=0 total=0), apf IOs used=0

Total writes for this command: 0

(1 row affected)

Feature Limitations

For a function-based index, there is a restriction that the expression used to create the index must return the same result whenever it is referenced with the same inputs. This property is called deterministic. For example, the expression “UnitPrice * QuantityOrdered” always provides the same answer for the same row values. However, getdate() returns a different result every time it is referenced; an index built on such a nondeterministic expression could return inconsistent results.
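A short sketch of the distinction, built around the UnitPrice * QuantityOrdered expression cited above (the OrderDetail table itself is hypothetical):

create table OrderDetail
(OrderID int,
UnitPrice int,
QuantityOrdered int)

-- Deterministic expression: acceptable as a function-based index key.
create index od_amount_idx on OrderDetail (UnitPrice * QuantityOrdered)

-- By contrast, an expression involving getdate() is nondeterministic
-- and cannot serve as a function-based index key.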

Function-based indexes cannot contain subqueries, local vari-

ables, or aggregate functions. Each of these criteria is considered to

be nondeterministic.

Probably the most important concern you might expect involves index maintenance, because the values on which the index is created are stored in the index pages at index creation. Sybase has addressed this concern, so it is not an issue: if the values of the base components on which the index is built change, the index is updated to reflect the change. For all insert, delete, and update operations, ASE automatically maintains the function-based index without any user intervention. This behavior is new to ASE 15.

Function-based indexes cannot be clustered. Although this may

be considered for a future release of ASE, it is a limitation in the cur-

rent release.

Impacts to tempdb

Function-based indexes have a beneficial effect on tempdb usage.

Since the search criteria are predetermined, worktables are not neces-

sary for data evaluation in complex queries. The worktables were

previously used to hold temporary result sets while other data was

being read. The data would then be compared, sorted, and/or merged

to return the final result set. The function-based index avoids the


worktable overhead by using an index with preresolved search

criteria.

Impact to Existing Application Code

Implementation of a function-based index has no impact on currently existing application code. From a code change standpoint, ASE 15 removes the old requirement to manually recompile stored procedures before they can use a new index: when an index is created, stored procedures that reference the affected table are automatically recompiled to determine whether the new index is applicable to their code. This behavior optimistically assumes the recompilation will benefit existing code, and it should be kept in mind when a new index is created.

Determining the Use of a Function-based Index

In order to determine if a function-based index will be useful, the

database administrator will need to evaluate the existing stored proce-

dures, looking for candidate code. Initial candidate code will be:

� Stored procedures used for decision support systems. These

stored procedures tend to be I/O and CPU intensive.

� Stored procedures that are used in batch processing

� Stored procedures that are used for predetermining data that will

be loaded into a data warehouse

Each of these types of candidates is likely to include function string

processing in their where clauses.

Prior to creating an index, you should gather baseline data on the

number of I/Os involved, the tables involved, and the CPU required

to resolve the query. If the where clause has functions in it, these

should be considered as individual indexes or composite indexes.

Also keep in mind the likelihood that a covering index might be con-

sidered if the result set from the where clause returns few columns

from the base table. Running statistics io will help determine the use-

fulness of covering the additional columns vs. retrieving the data

directly.


Optimizer Statistics

As with any other type of index, the statistics for a function-based

index need to be initialized following the creation of the index. The

database administrator will be responsible for determining the

requirements for updating the statistics.
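For example, assuming the rental_fbi_idx index created earlier in the chapter, update index statistics can be used to initialize and later refresh the statistics:

update index statistics Rental rental_fbi_idx
go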

Behind the Scenes

For updates and inserts to rows where a function-based index exists, ASE automatically updates the index key. Internally, a function-based index key is treated as a materialized computed column, but one that is hidden (i.e., you won't see it in the system tables). For an update or insert, the expression is evaluated and the result is stored in the data row, then the index page is updated; this process is the same as for regular columns. Consequently, there is no extra contention on the index page or data page as a result of the function-based index. There may, of course, already be other contention issues, but the function-based index shouldn't amplify them.

The main resources involved are extra CPU time for evaluating the expressions and additional disk space for storing the hidden column.

Getting Index Information

Information about an index is stored in several system tables. The

information can easily be obtained by running sp_help or

sp_helpindex on the base table. The following list of system tables

briefly identifies the changes specific to the computed column index

and function-based index. For more detail, see Chapter 2, “System

Maintenance Improvements.”

� syscolumns — The table will now contain one row for each func-

tion-based index key defined for the base table.

� syscomments — The table will now contain one row that con-

tains the text for each function-based index key expression.

� sysconstraints — The table will now contain one row that con-

tains the text for each function-based index associated with the

base table.


� sysindexes — The table will now contain one row for each com-

puted column index or function-based index associated with the

base table. An internal status of hex: 0x8000, decimal 32768 has

been added to the domain for the status2 field to indicate that the

index is a function-based index (see the sketch below).
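As a sketch of how the status2 flag described above might be used, the following query lists the function-based indexes defined on a table:

select name
from sysindexes
where id = object_id("Rental")
and status2 & 32768 = 32768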

Summary

The main objective of functional indexes is to provide the optimizer with preprocessed data that can be used in where clauses. Both computed column and function-based indexes are created by the database administrator after determining that certain long-running queries can benefit from one or both types of indexes. Although there are several restrictions when defining and maintaining these types of indexes, their benefit to DSS applications and batch processing can quickly compensate for the minimal overhead required to support them.


Chapter 9

Capturing Query Processing Metrics

New to ASE 15, Sybase introduces a mechanism to capture many

existing query metrics, in addition to some new metrics. In this chap-

ter, we explore how to capture and analyze the Query Processing

Metrics (QP Metrics). The chapter explores how to capture QP Met-

rics as well as how to query the QP Metrics view. We discuss how to

analyze the query metrics data in order to detect the most frequently

executed queries and the most expensive queries, and how to monitor

the performance of the same query over time. Suggestions are also

made on how to make performance degradations a little more obvi-

ous when using the QP Metrics capture.

Alternatives to Query Processing Metrics

Before going further into the topic of query metrics, let's begin by

looking at the possible alternatives for this process in order for the

database administrator to form a basis for selection of the best query

monitor tool offered by ASE 15.

� MDA tables — Provides an assortment of low-level server, pro-

cess, and query-level information.

� Graphical Plan Viewer — Displays graphical query plan, esti-

mated physical and logical I/O, and estimated and actual

rowcounts for individual query.

� set statistics plancost on — Displays query plan and estimated

physical and logical I/O for individual queries in text format.


� set plan for show_execio_xml — Displays query text, query

plan, and estimated physical and logical I/O for individual query

in XML format.

� dbcc traceon 311 — Displays estimated physical and logical I/O

for the individual query.

� statistics time and statistics io — Displays execution and elapsed

time plus physical and logical I/O. In this chapter, a comparison

is performed between the metrics and the data captured by statis-

tics time and statistics io.

The query metrics contain an assortment of the features offered by

many of the above alternatives but, of course, not every feature of the

alternatives. Before jumping into query metrics, ensure you have

selected the correct tool for the targeted analysis as one or more of

the alternatives may better suit your analysis needs.

Introduction to Query Processing Metrics

So what exactly are the Query Processing Metrics? The Query Pro-

cessing Metrics, or QP Metrics, are query plan level metrics that

describe the performance characteristics of ad-hoc SQL and stored

procedures. Query Processing Metrics is a new feature introduced for

Sybase ASE 15. While the Query Processing Metrics is a new ASE

15 feature, the infrastructure for this process is based on the ability to

capture abstract query plans and the accompanying query text. This

infrastructure was introduced in ASE version 12.

The QP Metrics subsystem was added to ASE 15 in order to pro-

vide the database administrator with a relatively easy method to track

query performance metrics. With these performance metrics, the

database administrator gains a tool to help identify the frequency of

query execution. Additionally, QP Metrics can assist in the identifica-

tion of performance bottlenecks in ASE 15 where opportunity exists

to improve server performance.

The QP Metrics are maintained at the database level. This means

each database independently holds a QP Metrics view. Each view is

materialized from the sysqueryplans table, which exists independ-

ently in all databases. As the QP Metrics data is an extension of

abstract plan capture, the QP Metrics data and query capture

process will not cause conflicts on systems that are actively


monitoring performance with either the MDA tables or the

sp_sysmon system procedure.

Contents of sysquerymetrics

As mentioned earlier, sysquerymetrics is a view, not a table. This

view is comprised of two instances of the sysqueryplans table from

within the same database, joined via a self join. For those of us who

enjoy complex view syntax, below is the view creation syntax, fol-

lowed by a column-level description of the sysquerymetrics view:

create view sysquerymetrics

(uid, gid, hashkey, id, sequence, exec_min, exec_max, exec_avg, elap_min,

elap_max, elap_avg, lio_min, lio_max, lio_avg, pio_min, pio_max,

pio_avg, cnt, abort_cnt, qtext)

as select a.uid, -a.gid, a.hashkey, a.id, a.sequence,

convert(int, substring(b.text, charindex('e1', b.text) + 3,

charindex('e2', b.text) - charindex('e1', b.text) - 4)),

convert(int, substring(b.text, charindex('e2', b.text) + 3,

charindex('e3', b.text) - charindex('e2', b.text) - 4)),

convert(int, substring(b.text, charindex('e3', b.text) + 3,

charindex('t1', b.text) - charindex('e3', b.text) - 4)),

convert(int, substring(b.text, charindex('t1', b.text) + 3,

charindex('t2', b.text) - charindex('t1', b.text) - 4)),

convert(int, substring(b.text, charindex('t2', b.text) + 3,

charindex('t3', b.text) - charindex('t2', b.text) - 4)),

convert(int, substring(b.text, charindex('t3', b.text) + 3,

charindex('l1', b.text) - charindex('t3', b.text) - 4)),

convert(int, substring(b.text, charindex('l1', b.text) + 3,

charindex('l2', b.text) - charindex('l1', b.text) - 4)),

convert(int, substring(b.text, charindex('l2', b.text) + 3,

charindex('l3', b.text) - charindex('l2', b.text) - 4)),

convert(int, substring(b.text, charindex('l3', b.text) + 3,

charindex('p1', b.text) - charindex('l3', b.text) - 4)),

convert(int, substring(b.text, charindex('p1', b.text) + 3,

charindex('p2', b.text) - charindex('p1', b.text) - 4)),

convert(int, substring(b.text, charindex('p2', b.text) + 3,

charindex('p3', b.text) - charindex('p2', b.text) - 4)),

convert(int, substring(b.text, charindex('p3', b.text) + 3,

charindex('c', b.text) - charindex('p3', b.text) - 4)),

convert(int, substring(b.text, charindex('c', b.text) + 2,

charindex('ac', b.text) - charindex('c', b.text) - 3)),


convert(int, substring(b.text, charindex('ac', b.text) + 3,

char_length(b.text) - charindex('ac', b.text) - 2)),

a.text

from sysqueryplans a,

sysqueryplans b

where (a.type = 10)

and (b.type =1000)

and (a.id = b.id)

and a.uid = b.uid

and a.gid = b.gid

Contents of the sysquerymetrics View

Column     Description                                                Datatype
---------  ---------------------------------------------------------  --------
uid        User ID                                                    int
gid        Group ID                                                   int
hashkey    Unique hashkey                                             int
id         Unique ID                                                  int
sequence   Number used to link the text of queries with text larger   smallint
           than the varchar(255) of the qtext column
exec_min   Minimum execution time                                     int
exec_max   Maximum execution time                                     int
exec_avg   Average execution time                                     int
elap_min   Minimum elapsed time                                       int
elap_max   Maximum elapsed time                                       int
elap_avg   Average elapsed time                                       int
lio_min    Minimum logical I/O                                        int
lio_max    Maximum logical I/O                                        int
lio_avg    Average logical I/O                                        int
pio_min    Minimum physical I/O                                       int
pio_max    Maximum physical I/O                                       int
pio_avg    Average physical I/O                                       int
cnt        Execution count of the query or procedure                  int
abort_cnt  Count of query aborts by the resource governor             int
qtext      Text of the query                                          varchar
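For example, a simple sketch against the view that lists the costliest captured statements first, using the columns described above:

select hashkey, cnt, elap_avg, lio_avg, pio_avg, qtext
from sysquerymetrics
order by elap_avg desc
go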


How to Enable QP Metrics Capture

The activation and capture of Query Processing Metrics can be

enabled at both the session and server levels. At the session level,

enable the capture of the QP Metrics with the set command:

set metrics_capture on

go

The activation and capture of QP Metrics at the server level is

enabled as a server configuration parameter with the sp_configure

system procedure:

sp_configure "enable metrics capture", 1

go

Contrary to intuition, the server-wide configuration parameter of

"enable metrics capture" is independent of the session-level set

metrics_capture on command. Enabling the capture of query plans at

the session level is possible while the server-wide configuration

parameter for QP Metrics capture is disabled. Additionally, setting

the session-level metrics capture to “off” will only turn off session-

level settings for QP Metrics data capture. If the server is configured

to capture metrics, the server setting will override the session setting

in this case.
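To verify the server-level setting, sp_configure can be run with only the parameter name, which reports the current value:

sp_configure "enable metrics capture"
go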

When metrics_capture is enabled at the session level, metrics capture takes place for all DML issued by that session. Enabling it in one session does not enable it for other sessions using the same login. For example, if one instance of “user1” is logged in with metrics capture enabled at the session level, a second login of “user1” will not log query metrics unless that session also has metrics capture enabled.

For the session that does have query metrics enabled, metrics will be

captured regardless of the database used. However, there is one

important point to note. Similar to the behavior when the server-level

QP Metrics capture is enabled, the session where QP Metrics capture

is enabled at the session level will direct the captured SQL to the

database where the DML call originates. We will later explain this

concept in the section called “Accessing Captured Plans,” which dis-

cusses where SQL will be stored by the QP Metrics process.

Note: If the enable metrics capture configuration parameter is reset to 0 with sp_configure after any metrics have been captured, the QP Metrics data will remain until removed with the sp_metrics procedure.
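For example, using the drop command demonstrated later in this chapter ("1" refers to the active running group):

sp_metrics "drop", "1"
go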


Captured Information Explored

The SQL syntax executed as ad-hoc or batch SQL is automatically

captured and immediately available via the sysquerymetrics view

once the capture process is enabled. For stored procedures, additional

steps are necessary to flush stored procedure SQL and make it avail-

able through the sysquerymetrics view.

Stored Procedures

Unlike the SQL generated by batch SQL statements, the SQL gener-

ated by stored procedures is not automatically flushed from memory

by the QP Metrics process. In order to flush the QP Metrics informa-

tion from stored procedures, it is necessary to manually flush the

plans and metric information from memory into the tables that under-

lie the sysquerymetrics view. To flush the QP Metrics data relevant

to stored procedure execution, execute the sp_metrics system proce-

dure with the flush command:

sp_metrics "flush"

After the metrics are flushed, the stored procedure syntax captured into the sysquerymetrics view will contain neither the exec statement nor the values passed into the parameters of the stored procedure. Rather, the SQL statements from within the stored procedure will be present in the sysquerymetrics view, and in place of the parameter values you will see the local variable names within the qtext column.

Here is a quick example of what to expect in the qtext column of

the sysquerymetrics view for a stored procedure where parameters

are utilized:

select qtext from sysquerymetrics

go

qtext

-------

update Rental set tenant_name = @new_name where tenant_name =

@old_name

select @tenant_name = @new_name from Rental where tenant_name =

@new_name


The stored procedure input parameter, @new_name, is recorded by

the QP Metrics capture process as opposed to the value passed to the

parameter. Therefore, sysquerymetrics will continue to aggregate to

the same ID in the sysquerymetrics view for the same stored proce-

dure’s SQL, regardless of the uniqueness of the input parameters.

Going further into how metrics will be captured from stored

procedures, the following example stored procedure is employed:

create procedure querymetrics_demo @authors int

as

/*************************************************************************

** Name: querymetrics_demo

**

** Purpose: Stored Procedure to demonstrate how stored procedure

** text is captured by QP Metrics.

**

** Parameters: Input: @authors - unique authorID

**

** Output Params: NONE.

**

** Example Use: exec @return = querymetrics_demo 10000

**

** Residing Database: userDatabase

**

**************************************************************************/

declare @error int,

@message varchar(255),

@procedure varchar(30),

@return int

select identification, code, status, effectiveDate

from authors

where code = @authors

select @error = @@error

if (@error != 0)

begin

select @message = @procedure + ": returned error " +

convert(varchar(6),@error) + ", selecting author information."

raiserror 99999 @message

select @return = 2

return @return

end

go


Now, the metrics capture is enabled at the session level:

set metrics_capture on

The stored procedure is executed:

exec querymetrics_demo 30000

Metrics for stored procedures are then flushed to disk:

sp_metrics "flush"

go

The following chart indicates which portions of the stored procedure

will be contained in the QP Metrics repository after the metrics are

flushed.

Text                                                              Captured
----------------------------------------------------------------  --------
create procedure querymetrics_demo @authors int                    N
as                                                                 N
/* header comment block (Name, Purpose, Parameters, etc.) */       N
declare @error int,                                                N
    @message varchar(255),                                         N
    @procedure varchar(30),                                        N
    @return int                                                    N
select identification, code, status, effectiveDate                 Y
from authors                                                       Y
where code = @authors                                              Y
select @error = @@error                                            N
if (@error != 0)                                                   N
begin                                                              N
    select @message = @procedure + ": returned error " +           N
        convert(varchar(6),@error) + ", selecting author           N
        information."                                              N
    raiserror 99999 @message                                       N
    select @return = 2                                             N
    return @return                                                 N
end                                                                N
go                                                                 N


As demonstrated, only the query text from the portion of the stored

procedure that is executed and that references a table is captured.

Neither the comments nor the variable assignments are captured.

Additionally, if the error checking portion of the code were executed,

the text from the error checking section would not be captured as the

error checking in this example does not reference a table or view.

Now consider the same stored procedure, but hide the stored pro-

cedure text with the sp_hidetext command.

sp_hidetext "querymetrics_demo"

go

set metrics_capture on

go

exec querymetrics_demo 30000

go

sp_metrics "flush"

go

Text                                                              Captured
----------------------------------------------------------------  --------
create procedure querymetrics_demo @authors int                    N
as                                                                 N
/* header comment block (Name, Purpose, Parameters, etc.) */       N
declare @error int,                                                N
    @message varchar(255),                                         N
    @procedure varchar(30),                                        N
    @return int                                                    N
select identification, code, status, effectiveDate                 Y
from authors                                                       Y
where code = @authors                                              Y
select @error = @@error                                            N
if (@error != 0)                                                   N
begin                                                              N
    select @message = @procedure + ": returned error " +           N
        convert(varchar(6),@error) + ", selecting author           N
        information."                                              N
    raiserror 99999 @message                                       N
    select @return = 2                                             N
    return @return                                                 N
end                                                                N
go                                                                 N

Note the captured QP Metrics information is the same as when the

source code is not hidden, despite the execution of sp_hidetext upon

the called stored procedure. As sp_hidetext encrypts the source code

for database objects, the QP Metrics view can be used as a source to

extract the SQL from a hidden object.

Triggers and Views

It should be noted how the QP Metrics capture process handles the

SQL generated from within views and triggers. SQL executed against

a view is treated in the same manner as SQL executed against a table.

However, the SQL call used to materialize and create the view is not

captured as part of the QP Metrics data; only the DML is captured.

As for triggers, any trigger that fires as a result of inserts, updates, or

deletes on a table does not generate additional rows in the

sysquerymetrics view in any database.

Execute Immediate

Earlier we indicated that for stored procedures, parameters and vari-

ables utilized within the procedure will not be displayed. We also

mentioned that all SQL generated within a stored procedure must be

flushed with the sp_metrics system procedure before the data will

appear in the sysquerymetrics view. There is one exception, however,

in that SQL generated within a stored procedure with the “execute

immediate” command will materialize into the QP Metrics data with

the values for the parameters displayed. Additionally, stored proce-

dures with the “execute immediate” command will not need to

“flush” the SQL from memory in order to materialize the data into

the sysquerymetrics view. The following stored procedure example

has embedded comments to demonstrate which SQL statements will

automatically materialize into the sysquerymetrics view and which

will require the sp_metrics flush command:


create procedure brian_test (@a varchar(10))

as

while (convert(int,@a) > 1)

begin

-- Flush NOT Required:

exec ('select ' + @a + ' from Rental ' + ' where 1 = 2')

-- Will not appear in the sysquerymetrics data since a table is not accessed:

select @a = convert (varchar(10), convert(int,@a) -1)

-- Flush Required:

select "hello non immediate" from Rental where 1 = 2

-- Flush NOT Required:

exec ("select 'TESTING' from Rental where 1 = 2")

end

go

Accessing Captured Plans

Each individual database holds its own copy of the sysquerymetrics

view. This includes the system databases of model, master, tempdb,

sybsystemprocs, sybsystemdb, and even sybsecurity. Given that each

database holds a singular copy of the sysquerymetrics view, this can

be advantageous for some and cumbersome for others. For database

administrators who need to independently monitor the queries within

one database, the separation of queries into the individual databases

may be appreciated, provided your processes do not context switch

between databases. For applications that often switch context

between multiple databases and utilize SQL that references tables

from two or more databases at a time, it may be cumbersome to mon-

itor and track the use of SQL on your server. In this case, you may

want to evaluate the usage of the MDA tables and possibly the

Sybase auditing feature in order to track and monitor the use of SQL

at the server level.

The concept of database origination as the storage point for query

metric information is a very important point to remember with the

QP Metrics process. Do not assume the QP Metrics information will


populate into the database targeted by a DML statement. To illustrate

this point, consider the following examples.

First, clear the metrics from the user database and the tempdb

database with the sp_metrics procedure:

use properties

go

sp_metrics "drop", "1"

go

use tempdb

go

sp_metrics "drop", "1"

go

From the tempdb database, issue a select against a table in another

database:

use tempdb

go

select "Total" = count(*) from Properties..Rental

go

select "tempdb", cnt, qtext from tempdb..sysquerymetrics

go

select "properties", cnt, qtext from Properties..sysquerymetrics

go

Results:

Total
-----------
8848

            cnt  qtext
tempdb      1    select "Total" = count(*) from Properties..Rental

            cnt  qtext
properties  1    delete from sysqueryplans where gid = - @gid

Note from the results above, the qtext column in the tempdb database

contains the query text issued against the Rental table in the Prop-

erties database. It is important to note how ASE 15 stores the SQL

call against a table in the Properties database into the calling data-

base’s sysquerymetrics view.


Note: The concept of QP Metrics information captured to the database

from which queries are issued is very important to recognize, especially

for ASE installations where user logins are directed to default databases.

Consider a scenario where 500 users of an ASE database are directed to

the tempdb database as their default database upon login to ASE. Any

access to other databases that is not preceded by a use database com-

mand will cause all QP Metrics information for all 500 users to be

captured into the tempdb’s QP Metrics repository.

How Is the QP Metrics Information Useful?

The QP Metrics information can answer many questions about the

SQL issued on your server, including information regarding the I/O,

elapsed time, and frequency of execution. In the following examples,

several useful queries against the sysquerymetrics view are

demonstrated.

Find the query that has performed the most physical I/O in the

Properties database:

select qtext, pio_max

from Properties..sysquerymetrics

where pio_max = (select max(pio_max)

from Properties..sysquerymetrics)

go

Output:

qtext pio_max

select count(*) from PaymentHistory 1371305

Find the query most often executed in the Properties database:

select qtext, cnt

from Properties..sysquerymetrics

where cnt = (select max(cnt)

from Properties..sysquerymetrics)

Output:

qtext                                                                  cnt
---------------------------------------------------------------------  ----
select marketID, count(*) from Properties..Geography group by
marketID order by 2 desc                                               3798


Calculate the difference, as a percentage, between the average

elapsed time and the maximum elapsed time for all queries in a data-

base. (Note: The id column is used here for ease of display. It is more

useful to display the qtext with this statement.)

select id,

elap_max,

elap_avg,

"Percent Over Average" =

(convert(numeric(7,2),elap_max) -

convert(numeric(7,2),elap_avg)) /

convert(numeric(7,2),elap_avg )

* 100 -- Display as percent

from Properties..sysquerymetrics

where elap_avg > 0

order by 4 desc

Output of top 10 rows from the above query:

id elap_max elap_avg Percent Over Average

----------- ----------- ----------- --------------------------------

1958298036 5173 32 16065.6250000000

918294331 200 45 344.4444444400

1910297865 3 1 200.0000000000

1878297751 60 23 160.8695652100

1366295927 26 13 100.0000000000

1286295642 26 15 73.3333333300

1302295699 20 12 66.6666666600

1350295870 60 37 62.1621621600

1398296041 16 10 60.0000000000

1894297808 33 22 50.0000000000

Note: Once the id of the query is identified in the above statement,

query the sysquerymetrics view for the qtext field where the id is equal to

the id identified in the above query.

Similar statement as above, but restrict the output to only the query

with the maximum departure from the average elapsed time:

select id,

elap_max,

elap_avg,

"Percent Over Average" =

(convert(numeric(7,2),elap_max) -

convert(numeric(7,2),elap_avg)) /

convert(numeric(7,2),elap_avg )


* 100 -- Display as percent

from Properties..sysquerymetrics

where elap_avg > 0 -- prevent divide by zero

and (convert(numeric(7,2),elap_max)

- convert(numeric(7,2),elap_avg))

/ convert(numeric(7,2),elap_avg )

= (select max((convert(numeric(7,2),elap_max)

- convert(numeric(7,2),elap_avg))

/ convert(numeric(7,2),elap_avg ))

from Properties..sysquerymetrics

where elap_avg > 0) -- prevent divide by zero

Output:

id elap_max elap_avg Percent Over Average

--- ----------- ----------- ---------------------------

1958298036 5173 32 16065.6250000000

Find the query consuming the most physical I/O as a whole in a

given database:

select cnt, pio_avg, "total_pio" = cnt * pio_avg, qtext

from sysquerymetrics

where cnt * pio_avg = (select max(cnt * pio_avg)

from sysquerymetrics)

Output:

cnt pio_avg total_pio qtext

--- ---------- -------------- -----------------------------------

9 13685 123165 select count(*) from Rental

Display metric group and associated counts for each group from

sysquerymetrics:

select gid 'Group ID', count(*) '#Metrics'

from sysquerymetrics

group by gid

order by gid

Output:

Group ID #Metrics

1 290

2 353

3 759

4 253

5 986


Identification of Performance Regression

As time passes, data volume and usage will change. Statistics for

your data will change or become irrelevant. The distribution of your

data may change. User volume can change. A server’s configuration,

patch level, or version may also change. In other words, the QP Met-

rics gathered on the initial date of a server implementation may not

be relevant to the QP Metrics collected at a future point in time.

Therefore, it is a good idea to compare recent QP Metrics informa-

tion against old QP Metrics information. A comparison can answer

the following questions: “Is my server providing the same perfor-

mance today with query XYZ as it did one year ago?” “After

upgrading my server from ASE 15 to 15.x, did the server’s query

performance change?”

To help with this comparison, the QP Metrics process provides a mechanism to assist in detecting performance regression between servers, or within the same server after a significant event or passage of time. In order to perform this comparison, the database administrator

will need to save the current QP Metrics information and capture a

new set of QP Metrics on the same database. The QP Metrics com-

mand sp_metrics provides a mechanism to separate new and old QP

Metrics information. To separate the information, the sp_metrics sys-

tem procedure updates the gid column for the current metrics to a

user-specified value, while all new metric information from that point

forward will utilize the old gid, thus creating a logical separation of

performance metrics before and after a system event.

Note: Throughout this section, and in the Sybase documentation, the

separation of QP Metrics data is often referenced as creating a different

“running group.” This terminology is often used within this chapter, along

with references to the gid.

To instruct the QP Metrics system procedure to assign the current

running group a new gid, execute the sp_metrics procedure as

follows:

sp_metrics 'backup', "2"

go


Execution of the sp_metrics command as above will update the gid

column of the sysquerymetrics view for the current running group to

a value of 2.

Note: The sp_metrics command in the above example will assign the

current running group’s metric information the gid of 2; however, new

metric information will continue to be stored with a gid of 1.

A few rules to note with the assignment of a new gid for a batch of

QP Metrics entries:

� While the gid is a quoted identifier, it must be an integer.

� The QP Metrics cannot be backed up to the same gid more than

once within the same database.

� If the metrics for a gid are dropped with sp_metrics "drop" com-

mand, the gid can be reused.

� A new gid does not need to be sequential.

� You cannot back up to gid of 1, since this is the active gid.

Note: When a SQL statement is assigned to a new gid with the

sp_metrics command, subsequent executions of the same query in the

same database will generate the same hashkey, but not the same ID

within the sysquerymetrics view.

Comparing Metrics for a Specific Query between Running Groups

In order to compare metric information between running groups,

perform the following steps:

1. Identify the SQL syntax that needs to be compared between

groups, and select the QP Metrics information to compare. In the

following example, the average physical I/O will be scrutinized.

2. Select the hashkey for the target SQL and find each gid for the

running groups needing comparison.

3. Use the SQL syntax and search in the qtext column, or use the

hashkey to search for the targeted SQL statements.


Example:

select q2.pio_avg, q2.gid,q1.qtext

from sysquerymetrics q1,

sysquerymetrics q2

where q1.hashkey=q2.hashkey

and q1.gid=1

and q1.sequence <= 1

and q2.sequence <= 1

and q1.hashkey=742334506

and q1.qtext='select count(*) from Rental where petFlag = "Y"'

Results:

pio_avg gid qtext

204165 4 select count(*) from Rental where petFlag = "Y"

183639 1 select count(*) from Rental where petFlag = "Y"

Note: In the above example, the search is performed on both hashkey

and qtext. Only one is necessary to complete the search.

Interpreting the above results, we observe the same query has experi-

enced a change of approximately 20,000 physical I/Os from the point

in time associated with gid of 1, in comparison to the point in time

associated with gid of 4. Performance degradation is hereby identi-

fied, and may need investigation by the database administrator.

Recommendation: Alone, the gid column is only a number to differenti-

ate time periods within the QP Metrics data. Document the usage of the

gid column to note the specific time periods associated with each gid

along with the particular server configuration, version, and database

sizes. This documentation may provide an explanation or basis for inves-

tigation of performance degradation.


Comparing Metrics for All Queries between Running Groups

In order to compare the metrics for all queries between running

groups, the syntax is similar to the example where a single query’s

metrics are scrutinized. Here, we look at an example that compares

the average logical I/O for identical queries between the current

running group (where q1.gid = 1) and all running groups (where

q2.gid > 1).

select q1.lio_avg, q2.lio_avg, q2.gid,q1.qtext

from sysquerymetrics q1,

sysquerymetrics q2

where q1.hashkey=q2.hashkey

and q1.gid <> q2.gid

and q1.gid = 1

and q1.sequence <= 1

and q2.sequence <= 1

Results:

lio_avg lio_avg gid qtext

36 36 4 select count(*) from Tenants

1478 1471 4 select min(leaseStartDate) from Contracts

982 978 4 select tenantName, tenantID from Tenants

where tenantID = 5001

95336 205332 4 select count(*) from Contracts where

petDeposit = "Y"

In the above example, the SQL has identified four identical queries
between the first and fourth running groups (gid). Additionally,
performance degradation in logical I/O of over 100% is identified for
the fourth query. For this example database, investigation found there
was no index on the petDeposit column of the Contracts table, and the
number of rows in the table had more than doubled. The analysis and
the QP Metrics data suggest that an index on that column may be useful.


Recommendation: For large queries, the SQL syntax will span more

than one row in the sysquerymetrics view. This will be represented with

an escalating sequence number combined with the same hashkey. For

large SQL, search on the hashkey and not the qtext column, since trun-

cation and wrapping will occur within the qtext field. Additionally, add the

sequence column to the search in order to limit the QP Metrics rows

retrieved, where sequence number is either 0 or 1. The metrics displayed

for escalating sequence numbers for the same query will be redundant.

Why Separate the QP Metrics Data by gid?

So why should the database administrator differentiate the newly

captured QP Metrics data from that of a previous point in time? The

answer lies in how the QP Metrics capture formulates the captured
performance metrics in the sysquerymetrics view. The capture algorithm
averages newly captured QP Metrics data into the historical data that
matches the query's hashkey. This historical data may not be indicative
of today's performance; therefore, adding the most recent QP Metrics
into, say, a yearly average may not make performance changes obvious.
Consider the following example for an illustration of this problem.

Let’s assume we have a server where QP Metrics capture was

enabled six months ago. For the Properties database, a frequently
issued query against the Contracts table exhibited the following QP
Metrics as gathered over these six months:

select elap_avg, pio_avg, cnt, qtext from sysquerymetrics

where pio_max > 20000

go

Output:

elap_avg pio_avg cnt qtext

123033 204165 20512 select count(*) from Contracts where

petDeposit = "Y"

Note the high number of executions for this SQL statement. Because the
number of executions exceeds 20,000 over a period of six months, new
metric data will not skew the calculations very far; the old metric
data is too heavily weighted in this example. Of course,

degradation in performance could also be detected by paying close

attention to the “max” columns in the QP Metrics view, such as the

pio_max and elap_max fields. Observe, in the following output,

where the pio_max and elap_max fields are used to identify a


potential performance problem. Also note that any recent executions of
the same SQL statement that approach or exceed the current pio_max
and elap_max values will have little impact on the pio_avg and
elap_avg values, due to the weight of the high execution count for the
targeted query in this example:

select elap_avg, elap_max, pio_avg, pio_max, cnt, qtext from

sysquerymetrics

where pio_max > 20000

go

Output:

elap_avg elap_max pio_avg pio_max cnt qtext

123033 398033 204165 487165 20513 select count(*) from Contracts

where petDeposit = "Y"
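Assuming the capture maintains a simple running average (which the avg and cnt columns suggest), a single new execution costing 487,165 physical I/Os would move pio_avg by only about 14 I/Os: (204165 * 20512 + 487165) / 20513 is approximately 204179. The arithmetic can be checked directly:

-- Weighted-average check; the values come from the output above
select (204165.0 * 20512 + 487165) / (20512 + 1)
go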

Syntax Style Matters; Spacing Does Not
With the QP Metrics capture, a hashkey based on the SQL syntax is

used to link identical queries together when maintaining the averages

and counts. Queries that serve the same function can be treated dif-

ferently by the QP Metrics capture. Consider the following example.

Four SQL statements are executed from the same database, each

with an identical purpose. Each statement is typed differently; however,
only one statement is resolved as a different query. Which query do

you expect will be treated differently by the QP Metrics capture

process?

1. select count(*) from Rental where tenantID >= 5000

2. SELECT COUNT(*) from Rental where tenantID >= 5000

3. select count(*)

from Rental

where tenantID >= 5000

4. select count(*) from Rental where tenantID >= 5000

From the sysquerymetrics view:

Queries 1, 3, and 4 all count toward the same metrics in the QP Met-

rics data, while the second query’s QP Metrics are aggregated under

a different hashkey. This is due to the unique hashkey generated by


the capitalization embedded into the second example. Capitalized

characters hash differently than their lowercase counterparts.

select hashkey, cnt, qtext

from sysquerymetrics where hashkey in (185085529,724053625,721956441)

go

Output:

hashkey cnt qtext

185085529 3 select count(*) from Rental where tenantID >= 5000

724053625 1 SELECT COUNT(*) from Rental where tenantID >= 5000

Note: The previous examples were executed on a server with the iso_1
character set and the binary sort order bin_iso_1. This combination is
case-sensitive, so capitalization affects the hashkey.

It should also be noted that SQL statements that are identical other

than the inclusion of embedded comments will hash to different keys.

This is evident in the next example.

Example with no comments:

select count(*)

from Rental

where tenantID >= 5000

go

Example with comments:

select count(*) -- comment

from /* comment */ Rental

where /* comment */ tenantID >= 5000

/* comment */

go

select hashkey, cnt, qtext from sysquerymetrics where gid = 1

and qtext like "%Rental%"

Different hashkey for each select statement:

1> select hashkey,cnt, qtext from sysquerymetrics where gid = 1

2> go


hashkey cnt

qtext

----------- -----------

--------------------------------------------------------------

--------------------------------------------------------------

--------------------------------------------------------------

1567762236 1

select count(*) from Rental where tenantID >= 5000

2131636074 1

select count(*) -- comment from /* comment */ Rental where /*
comment */ tenantID >= 5000 /* comment */

Note: The comments in this case resulted in different hashkeys for what
are, essentially, two functionally identical queries that differ only in the
presence or absence of comments. Some third-party applications or

administrative tools will remove comments from SQL submitted to ASE.

In this scenario, the query with the comments and the query without the

comments will hash to the same key with the comments removed. In

other words, ASE will look at what it receives, and not necessarily what is

sent by some tools, when determining the hashkey value and storing the

metric information.

Clearing and Saving the Metrics
If it becomes necessary to remove the metrics for a database, the

sp_metrics system procedure is again utilized. The full syntax for

metrics data removal is:

sp_metrics "drop", "gid" [, id]

With the sp_metrics "drop" parameter, the metrics for a database can

be dropped at the query level, group level, or database level, depend-

ing on the parameters passed to the system procedure. To illustrate,

consider the following examples.

To drop specific metrics for one query in a database:

sp_metrics "drop", "2", "610817238"

-- This command drops the metrics for the query belonging to running
group "2", with the specific id of 610817238.

To drop all the metrics belonging to a running group:

sp_metrics "drop", "2"

-- This command drops the metrics for all queries belonging to running

group "2".


To drop all the metrics stored in one database:

sp_metrics "drop", "gid"

-- "gid" is a required parameter for the sp_metrics system procedure.

Execute the sp_metrics command repetitively with the "gid" for all

groups within the database to clear all metric information from a

database.
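A minimal sketch of that repetitive cleanup using a cursor follows. The cursor name is arbitrary, and the active group (gid 1) is excluded here, since whether it can be dropped may vary:

declare gid_list cursor for
    select distinct convert(varchar(12), gid)
    from sysquerymetrics
    where gid != 1
go
-- Drop the metrics for every saved running group, one gid at a time
declare @gid varchar(12)
open gid_list
fetch gid_list into @gid
while (@@sqlstatus = 0)
begin
    exec sp_metrics 'drop', @gid
    fetch gid_list into @gid
end
close gid_list
go
deallocate cursor gid_list
go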

Relationship between Stats I/O and QP Metrics I/O Counts

There is obvious overlap between the set option of statistics io and

the sysquerymetrics information. Both the statistics io set option and

the QP Metrics capture information regarding physical and logical

I/O counts. However, the statistics io option drills further into the
type of physical I/O (apf or non-apf), while sysquerymetrics reports
only the combined total of the two. The following examples highlight the

overlap between these tools, using a first-time run after the

sysquerymetrics information was cleared.

set statistics io on

go

select count(*) from Properties..Rental

go

Output of stats I/O:

Table: Rental scan count 1, logical reads: (regular=115 apf=0 total=115),

physical reads: (regular=87 apf=28 total=115), apf IOs used=28

Total writes for this command: 0

Relevant information from sysquerymetrics:

lio_avg pio_avg qtext

115 115 select count(*) from Properties..Rental

Note the lio_avg and pio_avg from the sysquerymetrics data exactly

match the statistics I/O total physical and logical reads.


Information for Resource Governor
For servers with the resource governor enabled, the QP Metrics infor-

mation holds data that benefits the database administrators on these

systems. For a specific query, as defined by the hashkey column in

the sysquerymetrics view, the QP Metrics information will capture

the number of aborts for that query when a resource limit is

exceeded. The QP Metrics for abort counts can be queried by includ-

ing the abort_cnt column in selects from the sysquerymetrics view.

For queries that reach limitations, the QP Metrics information will

not be aggregated with the totals for the specific query. For example,

if a resource limit is placed upon user1 restricting the user to a hard

limit of 50,000 I/O for a query, a query executed by user1 will abort

when the I/O limitation is hit. If that aborted SQL had an estimated
cost of 500,000 I/Os, those 500,000 I/Os are not aggregated with the
QP Metrics generated for the same query by other users.

Note: Sybase ASE’s resource governor can enforce resource limits to

take action prior to or during the execution of a query. A query can be

aborted by the resource governor due to the breach of a resource limita-

tion prior to query execution or during the execution of a query. In each

scenario, the aborted query does not count toward the aggregated QP

Metrics totals.
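A simple way to surface statements the resource governor has aborted is to filter on the abort_cnt column; the threshold of zero here is arbitrary:

select abort_cnt, cnt, qtext
from sysquerymetrics
where abort_cnt > 0
go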

Space Utilization Considerations
As referenced throughout this chapter, the QP Metrics data is materi-

alized through a view. The view is materialized from the

sysqueryplans table. Each database in ASE 15 has a single copy of

the sysqueryplans table. This table's data is placed on the "system"
segment of each database. If unique query generation is

high on a particular ASE 15 database or a great many running groups

are retained for a database, the system segment will grow.

Tip: To prevent filling the system segment for databases with the QP

Metrics capture enabled, do not isolate the system segment for these

databases to a small device fragment.
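Growth can be watched with the standard space procedures against the underlying table and its segment, for example:

sp_spaceused sysqueryplans
go
sp_helpsegment "system"
go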


Limitations
The QP Metrics data and capture process have a few limitations. First,

the metrics do not track the latest time a given SQL statement was

executed. To work around this limitation, periodically back up your

sysquerymetrics data to a different running group or gid and note the

date and time range of data for the noted gid.

The second limitation is identification. The QP Metrics process does
not track who executed a given SQL statement or a certain stored
procedure. If this is important for your installation, consider the
Sybase auditing functionality. Another alternative where accountability
is important is the MDA tables, where tables such as monProcessActivity,
monProcessSQLText, and monSysSQLText contain the ServerUserID
column to provide accountability by suid.
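For instance, a query along these lines attributes recently captured SQL text to a server login; it assumes the MDA tables are installed and the SQL text pipe configuration options are enabled:

select ServerUserID, suser_name(ServerUserID), SQLText
from master..monSysSQLText
go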

Summary
The QP Metrics capture process appears to be a very useful tool for

the ASE 15 database administrator. There is an element of risk if QP
Metrics is not carefully set up or monitored, since its repository
resides on the system segment. The risk can be minimized provided the
system segment of each user database where the QP Metrics process is
utilized is not limited to a very small fragment of disk. The process
is simple to set up and

maintain. QP Metrics provides an online history of your database’s

SQL usage. This SQL can be queried and analyzed. The QP Metrics

process operates without the installation of third-party applications or

the initiation of other Sybase ASE features that can carry additional

overhead, such as the Sybase audit process. Finally, the QP Metrics

process lays the groundwork for on-the-fly comparison of current

execution metrics with the saved baseline. This is a future enhance-

ment that may evolve from this new feature of ASE 15.


Chapter 10

Graphical Plan Viewer

Graphical Plan Viewer from Interactive SQL
The Graphical Plan Viewer (GPV) presents a graphical view of the
query plan of the current SQL text. The graphical view makes it easier
to understand the performance characteristics and statistics of a
query. GPV appears as a

tab on the Interactive SQL window. Interactive SQL can be started

either from Sybase Central or from a command line. Please refer to

the Sybase manuals for the details on Interactive SQL.

The following illustrations of the GPV are based on the Unix

version of Interactive SQL.

Make sure that $DISPLAY, $SYBASE, and $SYBROOT are set

correctly before starting dbisql. To start the dbisql client on Unix:

cd $SYBASE/DBISQL/bin

dbisql &

When dbisql is started on Unix, it opens the window shown in

Figure 10-1. Fill in the required information and press OK.


In the SQL Statements pane, enter your SQL commands. The results are
displayed in the Results pane on the Results tab, as shown in Figure
10-3. Click on the Plan tab to view the plan information for that query.


Figure 10-1

Figure 10-2

Figure 10-3


The following query was used to generate the GPV tree:

select avg(o_totalprice), avg(c_acctbal)

from orders_range_partitioned, customer_range_partitioned

where

o_custkey = c_custkey and

c_custkey = o_custkey and

o_custkey between 600 and 1000

The query plan for the above query is:

QUERY PLAN FOR STATEMENT 1 (at line 1).

7 operator(s) under root

The type of query is SELECT.

ROOT:EMIT Operator

|RESTRICT Operator

|

| |SCALAR AGGREGATE Operator

| | Evaluate Ungrouped COUNT AGGREGATE.

| | Evaluate Ungrouped SUM OR AVERAGE AGGREGATE.

| |

| | |MERGE JOIN Operator (Join Type: Inner Join)

| | | Using Worktable3 for internal storage.

| | | Key Count: 1

| | | Key Ordering: ASC

| | |

| | | |SORT Operator

| | | | Using Worktable1 for internal storage.

| | | |

| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | orders_range_partitioned

| | | | | [ Eliminated Partitions : 1 3 ]

| | | | | Table Scan.

| | | | | Forward Scan.

| | | | | Positioning at start of table.

| | | | | Using I/O Size 16 Kbytes for data pages.

| | | | | With LRU Buffer Replacement Strategy for data pages.

| | |

| | | |SORT Operator

| | | | Using Worktable2 for internal storage.

| | | |


| | | | |SCAN Operator

| | | | | FROM TABLE

| | | | | customer_range_partitioned

| | | | | [ Eliminated Partitions : 1 3 ]

| | | | | Table Scan.

| | | | | Forward Scan.

| | | | | Positioning at start of table.

| | | | | Using I/O Size 16 Kbytes for data pages.

| | | | | With LRU Buffer Replacement Strategy for data pages.

The above query plan is displayed graphically on the Plan tab below

the SQL Statements pane (see Figure 10-3). Each operator from the

above query plan is represented as a node in the plan viewer. The

cost of each operator is displayed as the percentage of the total cost.

In Figure 10-3, the cost of TableScan for orders_range_partitioned is

31.89% of the total cost. At 62.9%, the maximum resources are used

by the sorting operation on this table. The text query plan shown

above doesn’t provide the relative costing of each operator but the

GPV does. This feature of GPV makes it easier to find the bottleneck

of the query.

The tree can be expanded or collapsed by clicking on the – or +

symbols on the plan. You can collapse either the trunk of the tree

(Figure 10-4a) or a single branch (Figure 10-4b). This feature is par-

ticularly useful while analyzing large query plan trees.


Figure 10-4a


Double-clicking on any node (box) on the plan will populate the sta-

tistics relevant for that node in the bottom half of the pane. The

statistics include row count, logical I/O, and physical I/O. If the node

is at the lowest level of the tree, the statistics for only that particular

node will be displayed (see Figure 10-5). If the node has subtrees

underneath, the statistics will be shown for both the highlighted node

and the subtree (see Figure 10-6).

When the mouse is moved over any node, a small window called

a tooltip appears. The tooltip text provides details about that particu-

lar node without actually selecting that node. This feature is helpful

when comparing the statistics between the nodes. The tooltip text is

the small window within the Plan tab in Figures 10-5 and 10-6.


Figure 10-4b


The Statistics pane can be expanded by clicking on the up arrow,

which appears on the left side of the dividing line between the tree

and the statistics window (see Figure 10-7).


Figure 10-5

Figure 10-6


The Plan tab has three subtabs: Details, XML, and Text. The Details

tab shows the graphical tree and the statistics. The XML tab shows

the query plan in an XML format (see Figure 10-8). The Text tab

shows the traditional query plan (see Figure 10-9).


Figure 10-7

Figure 10-8


Graphical Query Tree Using Set Options
A similar graphical query plan can be generated using the set option

plancost. The isql command is:

1> set statistics plancost on

2> go

1> select avg(o_totalprice), avg(c_acctbal)

2> from orders_range_partitioned, customer_range_partitioned

3> where

4> o_custkey = c_custkey and

5> c_custkey = o_custkey and

6> o_custkey between 600 and 1000

7> go


Figure 10-9


The output from the above query is shown in Figure 10-10. The tree
looks very similar to the one generated by the GPV.

Even though this tree does not provide the percentages like the plan

viewer, it shows all the statistics at the same time. From the statistics

in the tree, the logical and physical I/Os for the orders_range_partitioned
table are 581 and 0, respectively. The values for the logical

and physical I/Os for the sort operation on the same table are 96 and

84 respectively. Since the physical I/Os are more expensive than the

logical reads, it can be concluded that the sort operation on the

orders_range_partitioned table is the most expensive operation in this

query. This conclusion is the same as the one from the GPV.


Figure 10-10


Summary
With ASE 15, Sybase not only improved the performance of the

queries but also provided additional tools to easily identify the per-

formance issues. A traditional query plan shows what indexes are

being used and which tables are undergoing table scans. The DBA

still has to find the statistics from a different source and use them to

analyze each operation identified by the query plan. This is a cumber-

some process, especially when troubleshooting a long query with a

large number of table joins. Although there are some dbcc commands

that generate detailed statistics for the query and explain the rationale

for choosing a particular query plan, the output from these commands

can easily run into hundreds of pages and be difficult to decipher.

It usually takes more time to identify the problem query or query

operation than to fix it. Both the GPV and the new set option provide

a very easy way of identifying the bottlenecks in the query execution.

These new features should greatly improve the efficiency of DBAs

and thus reduce the total cost of ownership.


Chapter 11

Sybase Software Asset Management (SySAM) 2.0

This chapter discusses the new release of SySAM. It addresses

licensing, the new reporting features of SySAM, and the necessity for

implementing version 2.0 of SySAM. Special attention will be given

to the SySAM licensing environments, the Try and Buy option, how

to acquire a license, and the reporting options that are provided.

Detailed information about the product is available in the Sybase

documentation. The goal of this chapter is to provide an overview of

the functionality of SySAM 2.0.

Introduction
Although concerns over software piracy have driven vendors to
develop methods to ensure compliance, many software companies

still do not enforce their own licensing agreements. In part, this is due

to the lack of tools that provide the necessary information about their

customers’ noncompliance. This has also been true for the Sybase

product set. With ASE 15, Sybase is addressing software license

compliance.

Sybase has partnered with Macrovision, adopting its FLEXnet product
set to provide tools for establishing, monitoring, and reporting product

compliance for Sybase products. In early 2005, Macrovision was

named to the Software Development Times magazine’s top 100 com-

panies for its FLEXnet Software Value Management platform.


Prior to ASE 15
SySAM was first introduced with ASE 12.0 to provide a method to

manage optional licensed features of ASE. For a feature to be made

available to the client, a certificate (provided with the purchase) had

to be registered with SySAM. Once the feature was registered and

ASE recycled, the feature could be activated.

In ASE 12.0 the following features were available after register-

ing the license with SySAM:

• ASE_HA — ASE High Availability System
• ASE_DTM — Distributed Transaction Management
• ASE_JAVA — Java in the Database
• ASE_ASM — Advanced Security Mechanisms (SSL, FGAC, etc.)

With ASE 12.5, six additional features were made available after reg-

istering the license:

• ASE_EFTS — Enhanced Full Text Search
• ASE_EJB — Enterprise Java Bean Server
• ASE_SXP — SQL Expert
• ASE_XFS — External File System Access through proxy tables
• ASE_XRAY — DBXray
• ASE_DIRS — LDAP Option

In most cases, since these options were not being used, the database

administrator never implemented SySAM. ASE would still come up,

but a message would be written to the errorlog, indicating that no

licensing information was available and that ASE was being brought

up with no additional features:

00:00000:00000:2005/02/26 06:00:54.90 kernel Warning:

There is no valid license for ASE server product. Server is booting
with all the option features disabled.


With ASE 15
With ASE 15, Sybase is adapting.

With the proliferation of regulatory requirements such as

Sarbanes-Oxley (SOX), companies are being required to account for

the products that they utilize. This includes software products. For

most companies, the issue is not what to monitor or account for, but

how to account for it. Companies like Macrovision are addressing the

issue by providing tools for the deployment of software product

licenses, for monitoring the usage of those products, and for reporting

of compliance or noncompliance of each product.

Although Sybase could have developed their own product to

address the issues related to compliance, they realized that other ven-

dors had already developed quality products that could be

incorporated into Sybase’s product set. Macrovision was the vendor

of choice — not only because of their product but also because of

their company stability and market position within the product licens-

ing industry.

For compliance monitoring to be effective, it has to be real time.

SySAM 2.0 addresses this issue by using a “heartbeat.” The heartbeat

of SySAM periodically checks the compliance of the product to

determine if the license is still effective and if the number of

deployed licenses provides for the number of active sessions request-

ing the product or service.

Components of Asset Management

SySAM Server

The SySAM server is also referred to as the license server. It can be

either local to the current machine where the Sybase product is

installed or on a networked server where all Sybase licenses are man-

aged. Redundant license servers can be utilized by those companies

concerned about disaster recovery.

The license management agent or daemon process is lmgrd. It is a

process started in Unix or a service within Microsoft Windows.


SySAM Utility Program — lmutil

The executable lmutil can be used to run the FLEXnet utilities. The

following example shows the results of running the help utility. The

documentation for FLEXnet is available from Sybase.

/sybase15/SYSAM-2_0/bin 16> ./lmutil help

lmutil - Copyright (C) 1989-2002 Macrovision Corporation

usage: lmutil lmborrow -status

lmutil lmborrow -clear

lmutil lmborrow {all|vendor} dd-mmm-yyyy:[time]

lmutil lmborrow -return [-c licfile] [-d display_name] feature

lmutil lmdiag [-c licfile] [-n]

lmutil lmdown [-c licfile] [-q] [-all] [-vendor name] [-force]

[-help]

lmutil lmhostid [-internet|-user|-display|-n|

-hostname|-string|-long]

lmutil lminstall [-i infile] [-o outfile]

[-overfmt {2, 3, 4, 5, 5.1, 6, 7.1, 8}]

[-odecimal] [-maxlen n]

lmutil lmnewlog [-c licfile] vendor new-file, or

lmutil lmnewlog [-c licfile] feature new-file

lmutil lmpath -status

lmutil lmpath -override {all | vendor } path

lmutil lmpath -add {all | vendor } path

lmutil lmremove [-c licfile] feature user host display

lmutil lmremove [-c licfile] -h feature host port handle

lmutil lmreread [-c licfile] [-vendor name] [-all]

lmutil lmswitchr [-c licfile] vendor new-file, or

lmutil lmswitchr [-c licfile] feature new-file

lmutil lmstat [-c licfile] [lmstat-args]

lmutil lmswitch [-c licfile] vendor new-file, or

lmutil lmswitch [-c licfile] feature new-file

lmutil lmver flexlm_binary


lmutil -help (prints this message)

SySAM Reporting Tool

The Macrovision tool called Samreport has been incorporated into

the SySAM environment. It provides reports on license usage. It is

started using the report command in the $SYBASE/SYSAM-2_0/


samreport/ directory. You can also start Samreport in the Sybase

Central external Utilities folder:

The default report file can be found in the $SYBASE/SYSAM-2_0/log/
directory. The directory and report file can be changed by specifying
the new locations in the SYBASE.opt file.

System Environment Variables

If you are using a network license server, there is one system
environment variable that you may choose to set — SYBASE_LICENSE_FILE.
This variable is used by the ASE server at startup to locate the
feature license file on the specified network license server. The
format for the servers specified in the SYBASE_LICENSE_FILE variable
is port@machine1:port@machine2: … :port@machineN (a semicolon is used
as the separator on Windows). The SYBASE_LICENSE_FILE variable was
used prior to ASE 15. It is no longer required for a local license
server.

Example (Unix csh):

setenv SYBASE_LICENSE_FILE 2031@moon:2031@sun:2031@stars

If you plan to use this variable, you should define it in your

RUN_SERVER file. The use of this variable is strictly optional

because the networked servers can be specified in a license file.


Figure 11-1


License File

In a standalone licensed environment, the license file contains an

entry for license server location and the registered feature licenses. In

pre-ASE 15 servers, the file was named license.dat. With ASE 15,

the files are suffixed with “.lic” (i.e., ASE150_EE_Beta.lic) and are

found in the $SYBASE/SYSAM-2_0/licenses directory. The following
example shows a sample license file —
$SYBASE/$SYBASE_SYSAM/licenses/ASE150_EE_Beta.lic. Detailed
information about

the components of the license file can be found in the FLEXnet

Licensing End User Guide.

Example:

SERVER dublin 94059f65 1700

VENDOR SYBASE

# Package definition for ASE Enterprise Edition.

PACKAGE ASE_EE SYBASE 2005.0910 COMPONENTS=ASE_CORE OPTIONS=SUITE \

ISSUED=30-mar-2005 SIGN2="0557 5FCE CDFC 7502 B5E8 23E0 46A2 \

A26C ADA0 97FB D0C7 352B 2456 EB53 97CA 18A2 BA17 76B0 A951 \

6C26 26F8 2029 5D1E B4AC 4302 4C96 5008 7F0E 465F F49E"

# ASE EE with SR license type

INCREMENT ASE_EE SYBASE 2005.0910 10-sep-2005 uncounted \

VENDOR_STRING=SORT=100;PE=EE;LT=SR HOSTID=ANY \

ISSUER="CO=Sybase, Inc.;V=15.0;AS=A;MP=T" ISSUED=30-mar-2005 \

NOTICE="ASE 15.0 Beta Test License" SN=123456-1 SIGN2="0FEF \

3CC4 AE10 025B 365D 4213 8127 88C0 7FD9 1330 8DDC 70F2 2262 \

B30B E6CB 1751 39BA FD75 EC42 052E 2509 94E8 611F 1E5B 18C5 \

2998 F76B 7A8C 5DCD 4588"

Sybase recommends that SYBASE_LICENSE_FILE not be set for a

local license server. For ASE 15, all files with a .lic extension that are

found in the $SYBASE/SYSAM-2_0/licenses directory will be used.

These files may contain licenses or pointers to the locations of license

servers that can be used to obtain licenses.

For a networked license server, the recommendation is that you

should only create a .lic file that contains the servers that are partici-

pating in the license network. SYBASE_LICENSE_FILE is not

required if you use this method. Unless a licensed feature will only be

utilized on the local server as opposed to across all servers, no other

.lic files will need to be defined on the local server. The following

example shows how to define a SYBASE.lic file for a networked

license server environment with two license servers.


Example:

SERVER dublin 94059f65 1700

SERVER newyork 8750a3d4 1700

VENDOR SYBASE

Options File

The options file contains entries for controlling the operating parame-

ters of the licensing environment. With ASE 15, the file is named

SYBASE.opt. The name of the options file corresponds to the name

specified in the VENDOR option in the license file. The following
example shows a sample options file — $SYBASE/$SYBASE_SYSAM/
SYBASE.opt. Detailed information about the components of the

options file can be found in the FLEXnet Licensing End User Guide.

Example:

DEBUGLOG +/sybase/sybase15_beta/SYSAM-2_0/bin/SYBASE.log

REPORTLOG +/sybase/sybase15_beta/SYSAM-2_0/bin/SYBASE.rl

In addition to specifying the location of the report logs, the options

file can be used to designate and limit the use of licenses by certain

hosts. For more information about this functionality, see the FLEXnet

Licensing End User Guide provided with the Sybase documentation.

Properties File

The properties file contains user-defined parameters that optionally

define information about the licensed environment. This file is found

on each ASE 15 server in the $SYBASE/ASE-15_0/sysam directory.

A template file (sysam.properties.template) is provided as an exam-

ple for you to use to set up the environment. When you build an ASE

server, a server-name.properties file is automatically created. You

can modify the resulting file for your environment.

$SYBASE/ASE-15_0/sysam/sysam.properties.template

2634B4D789DBB871E52B2C29807368E097FCC89088005C479F

email.smtp.host=smtp

email.smtp.port=25

[email protected]

[email protected]


email.severity=NONE

PE=EE

LT=SR

The SySAM Environment

Three options are available for setting up the SySAM environment.
The choice of licensing environment should be based on the
requirements of the organization.

The organization will need to consider normal business processing as

well as disaster recovery in determining the appropriate SySAM

environment.

In prior releases of ASE, there was a file in the SySAM licenses

directory (sybpkg.dat) that needed to be added to LM_LICENSE_FILE
for packaged or bundled licenses. This file is no longer neces-

sary in ASE 15. In ASE 15, the additional paths can be specified in

SYBASE_LICENSE_FILE using the SERVER keyword or specifying

the host and listening port in a user-defined .lic file.

Example:

In SYBASE.lic:

SERVER host host-id port

Setting SYBASE_LICENSE_FILE variable:

setenv SYBASE_LICENSE_FILE port@host

Standalone License Server

For an environment where the licenses for each machine will be

managed local to the machine, the environment can be set up as a

standalone serverless system. In this setup, licenses are checked out

and verified using the local license file. This file was also utilized by

prior versions of ASE. This choice of environment would be used for

those companies that want to have all or part of their environments

independent and nonreliant upon other systems. For example, in

some companies certain applications are determined to be “mission

critical” and therefore may need to be recovered first. In some cases,

these applications may be the only ones recovered at a disaster recov-

ery site. In order to provide independence of one environment from

the rest of the company’s hardware/application environments, the

licensing would have to be local to this machine.


Networked License Server

The second option is a networked license server that other ASE serv-

ers can access by the use of a company’s network. For most

organizations with multiple application environments on the same or

different hardware platforms, the networked license server will be the

appropriate choice. Based on the disaster recovery requirements of

the business, some application environments may need separate net-

worked license servers for production and for development and

quality assurance (QA) environments. Consider a business where one

or two applications are critical to the business. In the event a disaster

were to occur and these applications were required to be recovered in

order for the company to stay in business, a separate license server

would need to be utilized by these critical applications. For all other

environments, such as development and user acceptance, a single net-

worked license server could be defined to manage the licenses for

these environments. The use of multiple license servers should be

determined based on the requirements of the business. During
installation, if a license server is chosen, an appropriate
$SYBASE/SYSAM-2_0/licenses/SYBASE.lic file will be created containing
a pointer to the server. For example:
SERVER atlanta ANY

USE_SERVER

The following example uses the SYBASE_LICENSE_FILE variable on a
Unix system to point to the listening port of the license server. The
example assumes that you are running on a server other than the
license server.

On Server - Atlanta

setenv SYBASE_LICENSE_FILE 1700@dublin

Although the examples for the standalone and networked license

server are the same, the difference is that in the networked environ-

ment, the local server variable setting points to the listening port

where the actual license server is running. In the networked server

example, the SYBASE_LICENSE_FILE variable on server “Atlanta”

is pointing to the listening port 1700 on server “dublin.”

Redundant License Server

The third option is for those sites that have failover environments

where if the primary site is unavailable (such as in the event of a nat-

ural disaster), a second site is ready and available to take over the


work that normally would have been processed on the primary

machine. This option is also used for load balancing environments

where a query can be resolved on one of several machines that have

redundant data. When using a redundant license server, all licenses

will have to be entered on both the primary and the replicate servers.

Remember that there are several ways for data to be replicated onto a

secondary server.

In order to set up a redundant license server, at least two license

server entries are added to the .lic file. The order of the entries speci-

fies the order in which license servers are utilized. In the event

SySAM fails to attach to the first server, the next server specified will

be contacted in an attempt to secure a license. Remember that

SySAM 2.0 provides a run-time grace period of 30 days for ASE. So

if a license server goes down, the administrator will have 30 days to

bring it back before ASE will shut down. If the ASE_CORE (base

product) license cannot be reobtained, then ASE will shut down, and

those optional features will be disabled.

On Unix:
setenv SYBASE_LICENSE_FILE 1700@dublin:1700@london:1700@tokyo
On Windows:
set SYBASE_LICENSE_FILE=1700@dublin;1700@london;1700@tokyo

Acquiring Product Licenses
Product licenses are not required to activate ASE and other Sybase

products. A “grace period” has been defined for each product and

therefore the product can be activated prior to the license being

acquired from Sybase. Once the grace period has been exceeded, the

product will cease to work. The following example shows the error

messages that will be sent to the ASE errorlog. The grace period may

vary by Sybase product or option. There are products that do not

have a grace period.

The following example shows a license checked out under a grace
period. The errorlog text now looks like:

SySAM: Checked out graced license for 1 ASE_CORE (2005.0819) will expire

Sat Oct 22 15:00:27 2005


The above shows an install-time grace period. A run-time grace

period would include the expiration date, where a permanent license

is not in place, and also the first few words of the SIGN2 (ID) field

from the license:

SySAM: Checked out license for 1 ASE_CORE (2005.1231/31-dec-2005/0E7A

5668 9856 4ABF) will expire Sun Jan 01 00:00:00 2006.

The “expired” message now looks like:

SySAM: License for ASE_CORE could not be checked out within the grace

period and has now expired.

The grace period also has a secondary purpose. If for some reason the

license server is unavailable and the product or option is known by

ASE to have utilized a valid license recently, the product will revert

to a grace period and allow ASE and other options to be active. The

grace period assumes that you will take necessary steps to quickly

resolve the issue with the license server.

A grace period is also activated as a license approaches its expi-

ration date. The goal of this grace period is to allow the customer to

continue business as usual until their license(s) can be renewed. The

notification system will inform the user (via the errorlog, or email if

configured) in a half-life fashion; e.g., an install grace period of 30
days gets an initial message, then another at 15 days, then at 7.5
days, etc., down to

the last couple of hours. ASE will shut down when a license expires.

Each license contains the following information about the product or
option:
• Type of license — Server (Server - SR, Standby Server - SV, etc.) or
CPU (CPU - CP, Standby CPU - SF, etc.)
• Period of time for which the license is valid
• Entitlement quantity — The number of licenses that have been
purchased for the product or feature. In the case of site licenses,
this number can be increased as the quantity of licenses required
increases at the customer site. The customer will need to acquire the
additional entitlements from the Sybase Product Download Center
(SPDC). The license key will always contain a specific number of
licenses, whereas the download center has a record of the entitlements
purchased by the customer.
• License key
• License local host or host/port number for networked license server


The following is an example of the license key for the Developers

Edition license that comes with ASE on a Unix platform.

/sybase15_beta/SYSAM-2_0/licenses 3> cat SYBASE_ASE_DE.lic

# Package definition for ASE Developer Edition

PACKAGE ASE_DE SYBASE COMPONENTS="ASE_CORE ASE_JAVA ASE_ASM ASE_DIRS \

ASE_DTM ASE_HA ASE_MESSAGING ASE_PARTITIONS ASE_ENCRYPTION" \

OPTIONS=SUITE SUPERSEDE ISSUED=18-aug-2005 SIGN2="0C1E F1F1 \

7727 6D67 C7D9 7AA8 B0C1 1EF3 93E7 675A 9DA1 7D64 878D 537D \

AB75 1CDD 850C 2290 2572 612B 1B5A D1AA D4A9 B17C 5DF1 C7AE \

ACD7 BB0F E448 ADC0"

# ASE Developer Edition

INCREMENT ASE_DE SYBASE 2007.1231 permanent uncounted \

VENDOR_STRING=SORT=900;PE=DE;LT=DT HOSTID=ANY \

ISSUER="CO=Sybase, Inc.;V=15.0;AS=A;ME=1;MC=25;MP=0;CP=1" \

ISSUED=18-aug-2005 NOTICE="ASE Developer Edition - For \

Development and Test use only" TS_OK SIGN2="1D6C 07BE B477 \

A3E1 40B1 AF1D 9A54 4140 FF73 F986 BE25 08C3 1EFB 69D7 5B49 \

0E55 F834 4757 B472 DBF3 F8F7 DED2 D0E5 C5C5 59B8 45A1 AEE4 \

86DC 1E72 1DFD"

How do you acquire a license key? License keys are not provided

with the software release package. Once the product has been pur-

chased, a license authorization code will be provided to the customer

that will allow the customer to retrieve a license key. All license keys

are acquired by accessing the SPDC website (https://sybase.sub-

scribenet.com). The website walks you through the process of setting

up and downloading the license key. Once the SPDC generates a

license, the license will need to be saved to your local machine and

then it can be registered with SySAM.

What if I want to perform a trial evaluation of a product that I may or

may not purchase? In the event that you want to evaluate a product,

Sybase will provide a license authorization code that will allow a

temporary or short-term license key to be generated from the SPDC.

At the end of the evaluation period, the product will no longer func-

tion. If a permanent license is acquired prior to the end of the

evaluation period, as long as the license is installed prior to the termi-

nation of the trial period, the product will continue to work with no

interruption of service.

What if I have a licensed server on my laptop and I disconnect it from
the network for an extended period of time? Will the product continue
to work? SySAM 2.0 supports the concept of "borrowing" a license.


In this case, the SySAM utility lmutil lmborrow will be used to check

out a license for a specified period of time. Once the machine is reat-

tached to the network, you would need to “return” the license using

lmborrow again.

Product Licenses
With the initial release of ASE 15, the only product utilizing SySAM

2.0 will be ASE 15. SySAM 2.0 will be able to manage licenses for

Sybase products that are not utilizing 2.0 features, such as grace peri-

ods. Eventually, all Sybase products (EAServer, Replication Server,

PowerDesigner, etc.) will adopt SySAM 2.0 licensing. When they do,

they will implement grace periods. The length of time for each grace

period may vary across products, but the principles will not. The

products will implement the same types of grace periods as men-

tioned above.

Try and Buy

The Try and Buy option refers to a potential client being able to

download a trial version of the software, evaluate the product during

the allotted grace period, and if they purchase the product before the
end of the grace period, the product will continue to run

without interruption. The Try and Buy option provides a means for

the potential client to evaluate the product knowing that there is a

limited amount of time for the evaluation. If longer evaluation peri-

ods are required, the potential client would need to contact a sales

representative to address the time extension options.

License Activation

No new restrictions have been placed on license activation with

SySAM 2.0. For licensed options (e.g., HA), the license activation is

dependent on whether the option is dynamic or static. If the option is

dynamic, once the option has been activated by sp_configure, the

license will be acquired automatically without the need for ASE to be

recycled. Licenses that are graced may be acquired at the next heart-

beat if they are now available. The heartbeat is a predefined interval

used by SySAM to verify the legitimacy of a license. The interval of


the heartbeat is set by Sybase and is currently not adjustable. If at the

time of the heartbeat the license changes state (i.e., the license has

expired), ASE will have to be recycled for static options to be

activated.
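For example, enabling the semantic partitioning feature follows the normal sp_configure pattern; this option maps to the ASE_PARTITIONS license seen in the sp_lmconfig output in the next section, and whether a recycle is needed depends on whether the option is dynamic or static:

sp_configure 'enable semantic partitioning', 1
go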

SySAM Administration
The administration of the SySAM environment can be handled in one

of three ways: manually, via utilities, or via Sybase Central. A new

plug-in has been developed for Sybase Central for managing the

SySAM environment. In order to utilize the plug-in, the SySAM

Agent has to be running either on the license server or the local

server in a networked licensed environment. The FLEXnet Licensing

End User Guide provided with ASE 15 contains detailed information

and instructions on administering the licenses. The SySAM Agent

can be started by using the $SYBASE/$SYBASE_SYSAM/bin/

sysam script. $SYBASE/$SYBASE_SYSAM/bin/sysam utilizes

$SYBASE/$SYBASE_SYSAM/licenses to specify the location of

the license directory. The SySAM script can be used to start and stop

the server and perform some monitoring and diagnostics. The script

is simply a wrapper around the FLEX utilities. But since it also sets

the license path to the SYSAM-2_0/licenses directory, it ensures a

standard operating model.

sp_lmconfig

A new stored procedure is available to be used in the SySAM envi-

ronment. sp_lmconfig uses the license manager to gather information

about the licensed environment and display it within the isql session.
The example below is from a networked license server.

1> sp_lmconfig

2> go

Parameter Name Config Value

----------------- ------------

edition EE

license type SR

smtp host smtp


email recipients [email protected]

email severity NONE

smtp port 25

email sender [email protected]

License Name Version Quantity Status Expiry Date

--------------- ---------- -------- ------------ --------------------

ASE_HA null 0 not used null

ASE_DTM null 0 not used null

ASE_JAVA null 0 not used null

ASE_ASM null 0 not used null

ASE_EJB null 0 not used null

ASE_EFTS null 0 not used null

ASE_DIRS null 0 not used null

ASE_XRAY null 0 not used null

ASE_MESSAGING null 0 not used null

ASE_ENCRYPTION null 0 not used null

ASE_CORE 2005.12160 1 OK Permanent

ASE_PARTITIONS 1 graced Nov 10 2005 8:08AM

Property Name Property Value

------------- --------------

PE EE

LT SR

ME null

MC null

MS null

MM null

CP null

AS A

(return status = 0)

The variable LT in this example indicates that this is a license server.
For a server in a networked license environment that is not a licensing
server, the value for this variable would be "null."
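sp_lmconfig can also set the notification properties shown in the output above; the host name and severity below are illustrative:

sp_lmconfig 'smtp host', 'mailhost.mycompany.com'
go
sp_lmconfig 'email severity', 'WARNING'
go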


ASE 15 SySAM Upgrade Process
Pre-ASE 15, you could choose to not implement SySAM on an ASE

server that had no product that required a license key. With ASE 15,

you will have to implement SySAM. The grace periods will cause a

product to become inoperable in the event the grace period is

exceeded. When ASE is initially installed, the grace period will

begin. At that time, you will have a limited amount of time to imple-

ment SySAM and acquire your license key.

If you have never implemented SySAM, the upgrade process will

not be necessary. All you will need to do is define the SySAM

options, acquire a license key for ASE, and add the license key to a
license file qualified with the ".lic" extension (i.e., ASE150_EE.lic). Finally,

you will need to start SySAM using the sysam Unix command or

Windows process. A license will be checked out when ASE starts up.

The following example displays the lines of the ASE errorlog show-

ing where the license has been checked out.

SySAM: Using licenses from: /sybase/sybase15_beta/SYSAM-2_0/licenses

SySAM: Checked out license for 1 ASE_CORE (2005.1231/31-dec-2005/0E7A

5668 9856 4ABF) will expire Sun Jan 01 00:00:00 2006.

This product is licensed to: ASE 15.0 Beta Test License

Checked out license ASE_CORE

Adaptive Server Enterprise (Enterprise Edition)

When the server is shut down, the license will be checked back in.

SySAM: Checked in license for 1 ASE_CORE (2005.1231/31-dec-2005/0E7A

5668 9856 4ABF).

If you are running SySAM 1.0, you will have several options based

on the SySAM environment that you are currently using. The follow-

ing upgrade scenarios assume that you are upgrading to ASE 15.

If you are in a standalone environment, you will need to define

the environment variables and the SySAM options, acquire a license

key for ASE, and add the license key to the license file. This is the

same process as if you had never implemented SySAM 1.0.

Note: If you have other pre-ASE 15 servers on this machine, they can

coexist with the ASE 15 server. The pre-ASE 15 servers will continue to

check out their licenses from SYSAM_1_0/licenses/license.dat while the

ASE 15 server will use the SYSAM_2_0/licenses/*.lic file(s).


If you are using a networked license server that is currently using

SySAM 1.0, you have two options. You can either implement

SySAM 2.0 on a new network license server, or upgrade the SySAM

1.0 server to SySAM 2.0. The second option will be able to manage

licenses for pre-ASE 15 servers and ASE 15 servers.

Setting up a new networked license server uses the same steps as

noted for a new implementation of SySAM 2.0. Once the SySAM

environment has been set up, the program $SYBASE/$SYBASE_

SYSAM/bin/sysam will have to be started on the new networked

license server. The only additional steps will be at the local server

that is requesting the license from the networked license server. The

license file — SYBASE.lic — may need to be updated to include the

networked license server node and port (though this is configured

during the installation). The most common problem will be in the

specification of the fields. Be sure to specify the SERVER and the

VENDOR options in the local license file.

SySAM Reporting
There are several canned reports that are available with SySAM 2.0.

Each report belongs to one of three groups of reports: summary,

server usage, or raw data.

Summary Reports

• Usage Over Time — Shows the maximum number of licenses that have
been used over the specified time period. The default time period is
one second. A granularity of one hour will provide the same report as
the High Water Mark report.


Figure 11-2

Figure 11-3


• High Water Mark — A summarized report of the maximum number of
licenses that have been used. This report is similar to the Usage Over
Time report. The default time period is hourly.


Figure 11-4

Figure 11-5


• Summary Barchart — A summarized report in bar chart format that can
show the number of licenses checked out, the percentage of license
hours used, or the number of available licenses used. The format
compares usage by users and features. The report is based on
information that can be displayed in the Usage Summary report.


Figure 11-6

Figure 11-7


• Usage Efficiency — Shows the maximum number of licenses checked out
over the selected period of time. This report will be useful in
determining when additional licenses need to be purchased.


Figure 11-9

Figure 11-8


• Usage Summary — A textual report of the usage of each licensed
feature.


Figure 11-10

Figure 11-11


Server Usage Reports

• Server Coverage — Shows the uptime and downtime for the license
server. Downtime is only shown if it exceeds 15 minutes.


Figure 11-12

Figure 11-13


Raw Data Reports

• Raw — Shows the individual license usage events in chronological
order. This data can be used as input into user-defined reports and
can be used to troubleshoot licensing issues.


Figure 11-14

Figure 11-15


More detailed information about each report can be found in the

SAMreport Users Guide provided with SySAM 2.0.

Summary
Software license compliance has become a major issue for both the

vendor and the client. The vendor has several reasons for wanting the

client to be compliant. First, the vendor wants to get paid appropri-

ately for the use of the product. Second, the vendor wants to ensure

that their product is being used. Although people may think the ven-

dor does not care if the product sits on the shelf or not, in reality a

product sitting on the shelf means that the product is of no use to the

client. Third, for planning purposes, a vendor wants to know the

usage growth pattern. If the product usage is growing, they may be

able to work with the client to come up with pricing options for addi-

tional licenses, identify potential product weaknesses and strengths,

and recommend additional complementary products.

For the client, the knowledge that they are paying for what they

are using alleviates any concern about over- or underutilization of the

product that has been purchased for the company. The client also

wants to know when to plan for additional licenses.


SySAM addresses all of these issues. It ensures compliance. It

provides a means for determining growth and usage patterns. It

allows the potential client to “kick the tires” before purchasing the

product. It even allows the client a grace period for when they forget

to pay the bill on time.

SySAM is required for ASE 15. Choosing the licensing environment is the first issue to address prior to installing ASE 15. Once installed and active, the reports will provide the client with ways to justify the usage and any additional costs associated with the product.


Chapter 12

Installation of ASE Servers

ASE installation is a topic that is not often taught through formal

instruction. It seems to be assumed the database administrator can

thumb through the ASE installation manuals and complete the task.

While this is often true, a simpler, step-by-step approach is needed.

The approaches outlined in this chapter contain some examples of

how to install ASE and its components, and offer screen shots, steps,

and command-line examples from three different methods of ASE

installation. This chapter highlights how to install ASE, along with

how to install some of the ASE components such as ASE Backup

Server and the Job Scheduler. The installation process descriptions

attempt to remain platform independent; however, the screen shots

and examples are representative of the installation steps for Solaris

systems and Windows environments.

For the installation of ASE, this chapter offers instructions for three installation types:

• Installation with a resource file
• Installation with the srvbuild executable
• Installation with the dataserver binary

The authors of this book have installed countless ASE servers, using all of the installation methods mentioned in this chapter. Installing

ASE through the GUI and launching with srvbuild for Unix environ-

ments or syconfig.exe for Windows environments is the simplest

method for beginners. This method offers a step-by-step question/

answer interface, which also reminds the database administrator of all

installation options available.


As database administrators become more advanced and are responsible for a multitude of servers, the resource file method is perhaps the best installation method, as it reduces the amount of time the database administrator spends performing installations.

Prior to Installation for All Methods

Prior to beginning the installation process with any installation

method, several tasks must be completed:

• Creation of the Sybase login for the ASE host
  • Login to the host with the Sybase login
• Verification of sufficient disk space to install ASE
  • Ensure the Sybase login owns or has sufficient permissions to read, write, and control any file systems, virtual devices, or raw devices to be allocated to ASE.
• Verification of sufficient free memory to install ASE
  • Remember to account for the memory utilization of other tasks on the server when analyzing the available memory for ASE. Today, many IT shops employ consolidated hosts, where many instances of ASE exist together or with other applications.
• Verification of the existence of sufficient shared memory for ASE
• Verification that the host server's CPU meets or exceeds the recommended clock speed to run ASE
• Verification that the host's operating system is supported by Sybase
• Verification that the operating system is installed with the appropriate patches or service packs
• Acquisition of a copy of the ASE installation software package
• The initial installation of ASE will require a Java Runtime Environment.
  • During the first installation, the setup utility will unpack the Sybase ASE software. After this stage, it will be possible to install ASE without the use of the setup utility.
• Verification that the network protocol is supported by ASE, such as TCP or Named Pipes.


• Verification that environment variables are set up, either manually or with scripts such as C:\%SYBASE%\SYBASE.bat or $SYBASE/SYBASE.csh (see the example below)
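A quick sanity check on a Unix host might look like the following (a hypothetical session; the release directory will vary):

% source /sybase/SYBASE.csh
% echo $SYBASE
/sybase
% echo $SYBASE_ASE
ASE-15_0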

Some items are optional for the installation process of ASE:

• A running installation of Sybase Software Asset Management (SySAM) 2.0
• A license for the ASE server
  • Licenses are optional at installation time. Please see Chapter 11 for more information on license considerations for ASE 15.
• Licenses for advanced ASE features, such as ASE_PARTITIONS

Installation with Resource Files

According to many industry contacts, installation of ASE with the use of a resource file is a common method of installation for environments where database administrators are required to support many ASE servers. As such, this chapter covers resource file installation first, before continuing with installations through the graphical server build utility.

The resource file installation method provides the database administrator three main advantages:

• The ability to quickly create an ASE server, Backup Server, Monitor Server, XP Server, or Job Scheduler
• The ability to duplicate the same installation many times with similar configurations
• A less time-consuming process than the question and answer format of the srvbuild or syconfig.exe installation

Notes for Resource File Installation of ASE

To accomplish installation of ASE with a resource file, first locate

the resource file template.

Windows:

%SYBASE%\%SYBASE_ASE%\*.res


Unix:

$SYBASE/$SYBASE_ASE/*.res

Additionally, locate the executable to launch the resource file

installation.

For Windows environments, the executable is:

C:\%SYBASE%\%SYBASE_ASE%\bin\sybatch.exe

The Windows executable is launched as follows:

C:\sybase\ASE-15_0\bin>sybatch -r %SYBASE%\%SYBASE_ASE%\myserver.res

For Unix-based systems, the executable is:

$SYBASE/$SYBASE_ASE/bin/srvbuildres

The Unix executable is launched as follows:

$SYBASE/$SYBASE_ASE/bin/srvbuildres -r $SYBASE/$SYBASE_ASE/myserver.res

Note: $SYBASE/$SYBASE_ASE/ is the default location for resource files on Unix-based hosts, while %SYBASE%\%SYBASE_ASE% is the default location for resource files on Windows systems. The files named “myserver.res” are the resource files for ASE installation.

Upon launch, ASE will begin the installation process by creating an

entry for the new server in the interfaces file, and continue with

building the master device. For this example, the resource file shown

in Exhibit 1 (which follows) was employed.

Output from sybatch resource file build:

C:\sybase\ASE-15_0\bin>sybatch -r %SYBASE%\%SYBASE_ASE%\myserver.res

Running task: update Sybase Server entry in interfaces file.

Task succeeded: update Sybase Server entry in interfaces file.

Running task: create the master device.

Building the master device

.........Done

Task succeeded: create the master device.

Running task: update Sybase Server entry in registry.

Task succeeded: update Sybase Server entry in registry.

Running task: start the Sybase Server.

waiting for server 'SYBASE_BRIAN' to boot...

waiting for server 'SYBASE_BRIAN' to boot...

waiting for server 'SYBASE_BRIAN' to boot...

Task succeeded: start the Sybase Server.


Running task: create the sybsystemprocs database.

sybsystemprocs database created.

Task succeeded: create the sybsystemprocs database.

Running task: install system stored procedures.

Installing system stored procedures : 10% complete...

Installing system stored procedures : 20% complete...

Installing system stored procedures : 30% complete...

Installing system stored procedures : 40% complete...

Installing system stored procedures : 50% complete...

Installing system stored procedures : 60% complete...

Installing system stored procedures : 70% complete...

Installing system stored procedures : 80% complete...

Installing system stored procedures : 90% complete...

Installing system stored procedures : 100% complete...

Task succeeded: install system stored procedures.

Running task: set permissions for the 'model' database.

Task succeeded: set permissions for the 'model' database.

Running task: set local Adaptive Server name.

Task succeeded: set local Adaptive Server name.

Running task: set the XP Server for the Adaptive Server.

Task succeeded: set the XP Server for the Adaptive Server.

Running task: update XP Server entry in registry.

Task succeeded: update XP Server entry in registry.

Running task: set the default character set and/or default sort order for the

Adaptive Server.

Setting the default character set to cp850

Sort order 'binary' has already been installed.

Character set 'cp850' is already the default.

Sort order 'binary' is already the default.

Task succeeded: set the default character set and/or default sort order for the

Adaptive Server.

Running task: set the default language.

Setting the default language to us_english

Language 'us_english' is already the default.

Task succeeded: set the default language.

Running task: install sybsystemdb database.

sybsystemdb database extended.

Task succeeded: install sybsystemdb database.

Running task: shutdown the Sybase Server.

Waiting 15 seconds for the operating system to reclaim resources before rebooting.

Task succeeded: shutdown the Sybase Server.

Running task: start the Sybase Server.

waiting for server 'SYBASE_BRIAN' to boot...

waiting for server 'SYBASE_BRIAN' to boot...

Task succeeded: start the Sybase Server.


Configuration completed successfully.

Exiting.

The log file for this session is 'C:\sybase\ASE-15_0\init\logs\log1107.005'.

After the resource file installation is complete, a basic ASE server,

Backup Server, and XP Server are created based upon the contents of

the example resource file. Additionally, the resource file installer

application creates a log file should the installation need to be

reviewed for errors. The location of this log file is included as the last

line of the installer’s output. At this point, the installation is techni-

cally complete; however, the needs of the database administrator and

the business will not likely be met by a base installation of ASE. The

database administrator should then take steps to reconfigure ASE to

match the needs of the environment, add disk resources to the ASE

server, and create and load databases.

Exhibit 1: Sample contents of a resource file

#

# --- This file was generated by Sybase InstallShield Installer ---

#

sybinit.boot_directory: C:\sybase

sybinit.release_directory: C:\sybase

sqlsrv.do_add_server: yes

sqlsrv.network_hostname_list: BRIAN

sqlsrv.network_port_list: 2125

sqlsrv.network_protocol_list: tcp

sqlsrv.notes:

sqlsrv.connect_retry_delay_time: 5

sqlsrv.connect_retry_count: 5

sqlsrv.new_config: yes

#

sqlsrv.server_name: SYBASE_BRIAN

sqlsrv.sa_password:

sqlsrv.sa_login: sa

sqlsrv.server_page_size: 2k

#

# --- Set up master ----

#

sqlsrv.master_device_physical_name: C:\sybase\data\master2.dat

sqlsrv.master_device_size: 200

sqlsrv.master_db_size: 150

sqlsrv.disk_mirror_name:

#

# --- Set up sybsystemprocs ----


#

sqlsrv.do_create_sybsystemprocs_device: yes

sqlsrv.sybsystemprocs_device_physical_name: C:\sybase\data\sysprocs2.dat

sqlsrv.sybsystemprocs_device_size: 200

sqlsrv.sybsystemprocs_db_size: 200

sqlsrv.sybsystemprocs_device_logical_name: sysprocsdev

#

# --- Set up sybsystemdb ----

#

sqlsrv.do_create_sybsystemdb: yes

sqlsrv.do_create_sybsystemdb_db_device: yes

sqlsrv.sybsystemdb_db_device_physical_name: C:\sybase\data\sybsysdb2.dat

sqlsrv.sybsystemdb_db_device_physical_size: 3

sqlsrv.sybsystemdb_db_size: 3

sqlsrv.sybsystemdb_db_device_logical_name: sybsystemdb

#

sqlsrv.errorlog: C:\sybase\ASE-15_0\install\SYBASE_BRIAN.log

sqlsrv.sort_order: binary

sqlsrv.default_characterset: cp850

sqlsrv.default_language: us_english

#

sqlsrv.preupgrade_succeeded: no

sqlsrv.network_name_alias_list:

sqlsrv.resword_conflict: 0

sqlsrv.resword_done: no

sqlsrv.do_upgrade: no

sqlsrv.characterset_install_list:

sqlsrv.characterset_remove_list:

sqlsrv.language_install_list:

sqlsrv.language_remove_list:

sqlsrv.shared_memory_directory:

sqlsrv.addl_cmdline_parameters:

sqlsrv.eventlog: yes

sqlsrv.atr_name_shutdown_required: yes

sqlsrv.atr_name_qinstall: no

#

sybinit.charset: cp850

sybinit.language: us_english

sybinit.resource_file:

sybinit.log_file:

sybinit.product: sqlsrv

#

sqlsrv.default_backup_server: SYBASE_BRIAN_BS


Installation of ASE Components with a Resource File

In a manner similar to the resource file installation for ASE, compo-

nents such as the Job Scheduler can be installed with a resource file.

Below, we have a resource file set up to perform the task of Job

Scheduler installation, followed by the Windows sybatch execution

to launch the resource file server builder. Note the resource file

installer program is the same program regardless of whether the data-

base administrator is installing ASE or an ASE component.

#

# --- This file was generated by Sybase InstallShield Installer ---

#

# Creating a Job Scheduler and Self Management

#

sybinit.boot_directory: C:\sybase

sybinit.release_directory: C:\sybase

sybinit.product: js

sqlsrv.server_name: SYBASE_BRIAN

sqlsrv.sa_login: sa

sqlsrv.sa_password:

js.do_add_job_scheduler: yes

js.job_scheduler_agent_name: SYBASE_BRIAN_JSAGENT

js.network_port_list: 4905

js.network_hostname_list: BRIAN

js.network_protocol_list: tcp

js.do_create_sybmgmtdb: yes

js.sybmgmtdb_device_physical_name: C:\sybase\data\sybmgmtdb2.dat

js.sybmgmtdb_device_logical_name: sybmgmtdev

js.sybmgmtdb_device_size: 75

js.sybmgmtdb_db_size: 75

js.do_add_self_management: yes

js.self_management_login: sa

js.self_management_password:

The Windows executable is launched as follows:

C:\sybase\ASE-15_0\bin>sybatch -r %SYBASE%\%SYBASE_ASE%\js.res

For Unix environments, launch the executable as follows:

$SYBASE/$SYBASE_ASE/bin/srvbuildres -r $SYBASE/$SYBASE_ASE/js.res


Output from a successful installation of the Job Scheduler ASE com-

ponent will look something like the following:

C:\sybase\ASE-15_0\bin>sybatch -r %SYBASE%\%SYBASE_ASE%\js.res

Running task: Update Job Scheduler Agent entry in interfaces file.

Task succeeded: Update Job Scheduler Agent entry in interfaces file.

Running task: Create Sybase management database.

Created Sybase management database

Task succeeded: Create Sybase management database.

Running task: Install Sybase management stored procedures.

Installing Sybase management stored procedures : 10% complete...

Installing Sybase management stored procedures : 20% complete...

Installing Sybase management stored procedures : 30% complete...

Installing Sybase management stored procedures : 40% complete...

Installing Sybase management stored procedures : 50% complete...

Installing Sybase management stored procedures : 60% complete...

Installing Sybase management stored procedures : 70% complete...

Installing Sybase management stored procedures : 80% complete...

Installing Sybase management stored procedures : 90% complete...

Installing Sybase management stored procedures : 100% complete...

Task succeeded: Install Sybase management stored procedures.

Running task: Install Job Scheduler stored procedures templates.

Task succeeded: Install Job Scheduler stored procedures templates.

Running task: Install Job Scheduler XML templates.

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybBackupDbToDiskTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybBackupLogToDiskTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybDeleteStatsTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybRebuildIndexTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybRebuildTableTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybReclaimIndexSpaceTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybReclaimTableSpaceTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybReconfLocksTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybReconfMdCacheTemplate.xml

.Done


C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybReconfUsrConnsTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybUpdateStatsTemplate.xml

.Done

C:\sybase\ASE-15_0\jobscheduler\Templates\xml\en\SybSvrUpdateStatsTemplate.xml

.Done

Task succeeded: Install Job Scheduler XML templates.

Running task: Set Job Scheduler Agent name.

Task succeeded: Set Job Scheduler Agent name.

Running task: Enable Job Scheduler.

Exiting.

The log file for this session is 'C:\sybase\ASE-15_0\init\logs\log1107.006'.

GUI Installation Method with srvbuild Executable

After the Sybase software is downloaded from the Sybase Product Download Center (SPDC) or the installation CD is mounted to the system, the Sybase ASE software will need to be unpacked. Locate the setup executable for ASE and run it.

Installation with the srvbuild or setup utility with ASE 15 launches an InstallShield wizard to accomplish the installation process. This installation method, in the authors' opinion, is the most time-consuming option.

On all operating environments, a similar look and feel will exist

with the ASE 15 Installer. With the Sybase Installer, the database

administrator traverses through a series of GUI screens, participating

in a question and answer session with the common interface.

The following preinstallation step is necessary for Unix environments: the GUI installation assumes the database administrator has the X-Windows environment set up. Additionally, the $DISPLAY environment variable must be set to direct the X-Windows session to the appropriate client session. This involves locating the client's IP address and setting the $DISPLAY environment variable to reflect it.


One method to obtain the IP address is to open a DOS session on the Windows client and run the ipconfig command:
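The output will resemble the following (illustrative values only):

C:\> ipconfig

Windows IP Configuration

Ethernet adapter Local Area Connection:

        IP Address. . . . . . . . . . . . : 192.68.0.101
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.68.0.1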

Within the context of an X-Windows session, set the $DISPLAY

environment variable to reflect the client’s IP address, and append

“:0.0” to the end of the IP address:

UNIXhost:~ 7> setenv DISPLAY 192.68.0.101:0.0

Step 1: For Windows, run the setup.exe executable from the direc-

tory containing the Sybase ASE 15 software. Click OK once the path

and executable are designated.


For Windows systems, the following message may appear depending

on the security settings enabled. Click OK.

The InstallShield application is launched.

For Unix:

Similar to the setup.exe executable in Windows environments, the

Unix installation executable will launch a GUI-based Installer appli-

cation. The prompt to launch the Unix installer is as follows:

$SYBASE/$SYBASE_ASE/bin/srvbuild

Step 2: At this stage of the installation process, it is recommended to

shut down all Sybase products on the host server. Click Next on the

Welcome screen to begin the installation.


Step 3: Select your country of origin, and accept the license agree-

ment by clicking on the “I agree” radio button. Click Next.


Step 4: Pick the appropriate location for the unpacked Sybase bina-

ries, and the $SYBASE or %SYBASE% directory, then click Next.

If the chosen Sybase directory does not exist, click Yes, and the

Sybase installer will create the designated directory.

Step 5: Select the appropriate installation type. For most installa-

tions, the “Typical” installation is sufficient. This book will proceed

with the “Custom” installation to demonstrate the interactive menu

system for all possible installation items. After selecting the “Cus-

tom” radio button, click Next.


Step 6: Since this installation is custom, the GUI will direct the data-

base administrator to a preselected list of installation options. Check

items to designate products for installation. After products are

selected, click Next. For this installation, the Job Scheduler Tem-

plates and Utilities are added to the installation.


Step 7: The installation GUI will check system information at this

step.

Step 8: After system information is verified, review and confirm the

software to be installed by the Installer program. At this stage, the

installation contents could be modified by clicking the Back button.


Step 9: On the same screen, scroll to the bottom to verify how much

physical disk will be utilized by the unpacked Sybase ASE software.

For this installation example, the Installer program indicates 929.6

MB of disk will be utilized by the Sybase software alone. Click Next

to confirm the installed items.

Step 10: The GUI Installer will then begin the process of extracting

the Sybase software.


Step 11: After the Sybase software extraction is complete, a confir-

mation message is displayed. Click Next to proceed.

At this stage of the installation of the ASE 15 software, the

installation process could be aborted if an alternative installation

method is desired. Reasons to abort the installation process at this

stage could be a need to build an ASE server with a resource file, or

with the creation of a server using the dataserver binary.

This stage may also be a good point to stop in the installation

process to perform pre-upgrade validations against pre-ASE 15

installations.

Step 12: Next, the installation asks for information on how to obtain ASE licenses. On this screen, the host name and port number of the Sybase Software Asset Management server are placed into the GUI by the Installer. SySAM was covered in Chapter 11, so at this point we will not make any license manager selections. This will be an example of the “Try and Buy” installation.

For reference, if “Yes” is selected, have the host name and port number of the Sybase Software Asset Management host available.


If you choose “No,” this screen will gray out the host and port num-

ber selections. Click Next to proceed.


Step 13: An information message will be displayed, reminding you

to download and install the license file after this installation. Addi-

tionally, the SySAM host and port number can be supplied at a later

time. The different license approaches are discussed in Chapter 11.

Step 14: The next screen will allow the configuration of the email

alert mechanism. This is for the publication of license information to

a designated email address. Enter the SMTP email host name along

with the SMTP server’s port number, the sender’s and recipient’s

email addresses, and the severity level for messages generated by

SySAM. Click Next when complete.


Step 15: Select the license type to install, and click Next.

Step 16: With the ASE 15 Installer, the database administrator can

configure ASE and other ASE-related products through the installa-

tion process. Check any products that need to be configured during

the installation process and click Next.


Step 17: The next screen asks for which items need to be configured

during the installation process. Click any items to be configured

where the default values are not acceptable, and then click Next.

Step 18: Next, select configuration options for ASE, including the

server name, page size, errorlog location, and master device size.

Remember, the server’s page size cannot be changed after the server

is built. Choose a value for the server’s page size carefully. As a gen-

eral guideline, a page size of 2 K is ideal for OLTP-based systems,

while larger page sizes are preferable for DSS-based systems. The

page size of 2 K is the default, and is recommended to be selected if

the system’s usage type is unknown. Click Next when complete.


Step 19: At this stage, the entries for the ASE server are complete.

This exercise continues with the installation steps for ASE’s Backup

Server. On this screen, accept or modify entries for the ASE Backup

Server. Click Next when complete.


Step 20: Accept or modify entries for the Sybase Monitor Server,

and click Next when complete.

Step 21: At this stage, the entries for the Backup Server are com-

plete. This example continues with the installation selections for the

XP Server. Select a port number for the XP Server or accept the

default and click Next.


Step 22: At this stage, the installation selections for ASE, Backup

Server, and XP Server are complete. This example continues with the

option selections for installing the Job Scheduler Agent. On the next

screen, accept the default entries for the Job Scheduler Agent, or

modify the defaults and then click Next.


Step 23: Enter the login ID to be used for Self Management of ASE.

Click Next to select the default of sa, or enter a different login and

click Next.

Step 24: For the installation of the Sybase Unified Agent, select the

appropriate adapter type and configuration settings for the Sybase

Unified Agent interface. Click Next when complete.


Step 25: Select the Security Login Modules for the Unified Agent,

then click Next when complete.


Step 26: At this stage, all entries for ASE, ASE Backup Server, XP

Server, Job Scheduler, and the Unified Agent are complete. Take a

moment to review the settings from the options selected through the

GUI screens. If the entries presented by the New Servers Summary

screen are correct, click Next to proceed with the server specifica-

tions selected.

Step 27: The Installer will then proceed to build the master device

and commence the installation of all selected products.


Step 28: Upon success, the message in Figure 12-34 will display.


Figure 12-34


Upon failure of any product, a message similar to the one shown in

Figure 12-35 will display.

If no errors occurred, click the radio button to restart the computer for

Windows environments and click OK. For Unix-based environments,

it is not necessary to reboot the host server.

Installation with the Dataserver Executable

A third method of installation is the dataserver executable method. This is perhaps the most “brute force” installation of ASE. It involves invoking the dataserver executable with several parameters that tell ASE how to build a very basic and incomplete server; upon completion, the result is a very basic server.

A valid reason to choose this type of installation is the independ-

ence from X-Windows for various environments. Independence from

X-Windows may be necessary due to an inability to set up the envi-

ronment. You may also choose this if the installation needs to be

performed via a remote connection. If a remote connection is slow,

the installation process with X-Windows can take considerable time.


Figure 12-35


To invoke the ASE installation for Unix environments:

dataserver -b200m -d/Sybase/master.dat -e/Sybase/errorlogs/errorlog/SYBASE.log -sSYBASE -z2k

For Windows environments:

sqlsrvr -b200m -dc:\Sybase\master.dat -ec:\Sybase\errorlogs\errorlog\SYBASE.log -sSYBASE -z2k
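For reference, our reading of the parameters used in these examples (see the ASE utility guide for the authoritative list):

-b200m   build a 200 MB master device
-d       physical path of the master device
-e       path of the ASE errorlog
-s       name of the ASE server
-z2k     server page size of 2 KB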

After the dataserver executable is executed, a master device and an

errorlog are present, but not much else. In order to launch ASE after

this step, the following steps are necessary:

1. Create a RUN_SERVER file with the location of all ASE entries,

such as the errorlog, configuration file, server name, and master

device location.

RUN_SERVER for Unix hosts:

#!/bin/sh

#

# ASE page size (KB): 2k

# Master device path: /dev/vx/rdsk/sandg/master

# Error log path: /sybase/logs/errorlog_BRIAN_TEST

# Configuration file path: /sybase/sybase15/ASE-15_0/BRIAN_TEST.cfg

# Directory for shared memory files: /sybase/sybase15/ASE-15_0

# Adaptive Server name: BRIAN_TEST

#

#LM_LICENSE_FILE="9999@hostname:9999@hostname"

SYBASE_LICENSE_FILE="9999@hostname:9999@hostname:4030@hostname"

export SYBASE_LICENSE_FILE

# setenv SYBASE_LICENSE_FILE 9999@hostname:9999@hostname:4030@hostname

/sybase/sybase15/ASE-15_0/bin/dataserver \

-sBRIAN_TEST \

-d/sybase/master.dat \

-e/sybase/logs/BRIAN_TEST.log \

-c/sybase/sybase15/ASE-15_0/BRIAN_TEST.cfg \

-M/sybase/sybase15/ASE-15_0

RUN_SERVER for Windows hosts:

rem

rem Adaptive Server Information:

rem name: SYBASE

rem master device: C:\sybase\data\master3.dat

rem server page size: 2048


rem master device size: 200

rem errorlog: C:\sybase\errorlog\SYBASE.log

rem interfaces: C:\sybase\ini

rem

C:\sybase\ASE-15_0\bin\sqlsrvr.exe -dC:\sybase\data\master3.dat

-sSYBASE -eC:\sybase\errorlog\SYBASE.log -iC:\sybase\ini

-MC:\sybase\ASE-15_0

For this installation example, a RUN_SERVER file was copied

from a working installation of ASE. Then the entries were modi-

fied to contain the errorlog, master device, and configuration file

locations for the new server.

Caution: If RUN_SERVER entries are not modified, it is possible this

installation method could cause problems for the copied ASE installation,

especially within environments where more than one ASE installation

shares the same host.

Recommendation: By default, the ASE installer places the -s line as the

last entry in the RUN_SERVER file for some environments. Move the -s

entry up a few lines in the RUN_SERVER file to where it is the first line

after the dataserver entry. Then process-level scans, such as those exe-

cuted by the showserver executable, display the server name when

executed.

2. Create the configuration file in the location specified by the

RUN_SERVER file.

3. Manually add an entry to the interfaces file for the new server.

Unix:

Add an entry to the $SYBASE/interfaces file. Assign a port number that is not already in use.

SYBASE

master tcp ether BRIAN 2500

query tcp ether BRIAN 2500

Windows:

Add an entry to the %SYBASE%\ini\sql.ini (interfaces) file.

Assign a port number that is not already in use.

[SYBASE]

master=NLWNSCK,BRIAN,2500

query=NLWNSCK,BRIAN,2500


4. Start ASE.

Unix:

$SYBASE/$SYBASE_ASE/install> ./startserver -f RUN_SYBASE

Windows:

C:\%SYBASE%\%SYBASE_ASE%\install> startsrv.exe -f RUN_SYBASE.bat

5. When the errorlog is reviewed at this stage, errors will likely be present. Most are attributable to one important missing device and database: the sybsystemprocs device and database. Create them as follows:

Unix:

disk init name="sysprocsdev",

physname = "/dev/vx/rdsk/rootdg/sysprocsdev",

size=128000

go

create database sybsystemprocs on sysprocsdev = 250

go

Windows:

disk init name = "sysprocsdev",

Physname = "C:\sybase\sysprocsdev.dat",

Size = 128000

go

create database sybsystemprocs on sysprocsdev = 250

go

6. At this stage, a server is built and can be accessed. However, the

installation scripts will need to be executed to fully build the

server and make it usable.

From the $SYBASE/$SYBASE_ASE/scripts directory,

install system procedures with the following installation scripts.

Unix:

host:~/Sybase/ASE-15_0/scripts > isql -Usa -SBRIAN_TEST

-i./installmaster -o./installmaster.out

host:~/Sybase/ASE-15_0/scripts > isql -Usa -SBRIAN_TEST

-i./installmodel -o./installmodel.out

host:~/Sybase/ASE-15_0/scripts > isql -Usa -SBRIAN_TEST

-i./installupgrade -o./installupgrade.out

host:~/Sybase/ASE-15_0/scripts > isql -Usa -SBRIAN_TEST

-i./instmsgs.ebf -o./instmsgs.ebf.out


Windows:

C:\sybase\ASE-15_0\scripts>isql -Usa -SSYBASE -iinstmstr

-oinstmstr.out

C:\sybase\ASE-15_0\scripts>isql -Usa -SSYBASE -iinstmodl

-oinstmodl.out

C:\sybase\ASE-15_0\scripts>isql -Usa -SSYBASE -iinsupgrd

-oinsupgrd.out

C:\sybase\ASE-15_0\scripts>isql -Usa -SSYBASE -iinstmsgs.ebf

-oinstmsgs.out

Upon execution of the scripts in step 6, the ASE installation is com-

plete. A very basic but complete ASE server will now be available to

add devices and databases.

Summary

This chapter has touched upon three different yet common installation methods for ASE and its supporting components. Each installation method has advantages and disadvantages that depend upon the level of expertise and confidence of the database administrator. Additionally, each method has appropriate areas of application that can be selected based upon the needs of the individual server installation. As indicated at the beginning of this chapter, a general trend toward resource file installations is expected as the experience level and the number of servers supported by a database administrator or DBA team increase.


Part II

Pre-15 Improvements


Chapter 13

Multiple Temporary Databases

Multiple temporary databases were first introduced in ASE 12.5.0.3.

Due to the limited exposure offered on this subject and the impacts of

ASE 15 on tempdb, this chapter addresses how multiple temporary

databases may benefit your organization. This chapter answers the

question “Why use multiple temporary databases?”

Introduction

Whether you are doing ad-hoc queries or full applications in ASE,

the tempdb is an integral component of your environment. ASE-gen-

erated temporary worktables used for resolving order by and group

by clauses are always created in tempdb. In addition, your application

might create temporary tables in order to provide a facility to manage

and make subsets of data available for improved application perfor-

mance. Multiple temporary databases allow the DBA to separate the

application workloads by assigning each application to a specific

temporary database. The temporary database(s) to which the applica-

tion is assigned may be shared by several applications. Even if no

application is assigned to a particular temporary database or tempo-

rary database group, if multiple temporary databases do exist, users

will be delegated to only one temporary database for their current

session.


In this chapter, we discuss the reasons for considering multiple

temporary databases. At the end of the chapter, a sample implementa-

tion is presented.

Purposes for Multiple Temporary Databases

There are two major purposes for choosing to utilize multiple tempo-

rary databases. The main purpose is to provide a method of

addressing lock contention on system tables when many concurrent

#temp tables are being created and dropped quickly and frequently

within an application. This has been a major problem with systems

where application performance is a critical issue. The second purpose

of multiple temporary databases is to provide a level of load balanc-

ing and resource sharing methods that cannot be achieved with a

single tempdb or through the use of resource limits.

Before proceeding with implementing temporary databases, you

need to clarify the business or application needs that will be

addressed. A method for evaluating and implementing multiple tem-

porary databases is presented later in this chapter.

Prior to ASE 15

The tempdb database has been an essential component of ASE since

it was first developed in 1984. The purpose of the database is to man-

age temporary subsets of data that only need to persist for the SQL

statement, a portion of the statement, the entire user session, or across

user sessions. Any data that is stored in tempdb is considered to be

discardable in the event ASE is shut down for any reason. As you

may know, this is the only database that is refreshed each time ASE

is started.

Each time the ASE server is restarted, the model database is cop-

ied and used to create tempdb just like any other database is initially

created. This behavior has existed since version 3 of Sybase. The

tempdb database is created prior to any user-defined databases being

recovered during system startup.


With ASE 15

In ASE 15, all temporary databases are rebuilt at ASE startup just

like the default temporary database — tempdb. Unless the recovery

order is modified using sp_dbrecovery_order, all user-defined tem-

porary databases are created in the order of their database ID, just as

if they were going through the normal server recovery process. If a

user-defined database has a lower database ID, it will be recovered

prior to the temporary database being recovered. Temporary data-

bases are not necessarily recovered before a user-defined database.

There is a limit of 512 temporary databases that can be created in

each ASE server environment.

System Catalog Changes

In ASE 15, table names can now be 255 characters in length. This

change may affect the size of the sysobjects, sysindex, and other sys-

tem tables within any database. For tempdb, the effect may require

additional space to be allocated to the system segment if you are

using a lot of temporary tables in your application. In most environ-

ments, this will not be an issue. Only when you create a lot of

temporary tables that all use full length table names and full length

column names might you see a noticeable difference. If the majority

of your applications use table and column names of less than 50 char-

acters, this should not be of concern in your environment. As a side

note, only 238 characters of the name you assign will be used to iden-

tify the table in tempdb. An additional 17 characters are appended to

the name to uniquely identify the table.

directio Support

The directio parameter is used to bypass writing updated pages to the

operating system’s buffer cache, thereby giving you raw partition

performance even though the temporary database devices are defined

in the file system. The directio option is only effective in those oper-

ating systems (OS) where direct I/O is supported and has been

activated by the OS administrator. Currently, it is supported only in

Unix and Linux operating systems. directio is a static option and

requires a server reboot in order for it to take effect. It can be set at

the device level with disk init, disk reinit, or sp_deviceattr. If you have

defined your devices for tempdb or other temporary databases with


dsync=false, the use of the directio option is available since these two

options are mutually exclusive (both of these options cannot be set to

true). With directio=true, the write to buffer cache is bypassed and the

write to disk for the page is forced. For temporary database devices,

this is not the behavior that is most efficient. With temporary data-

bases, the need to guarantee the write to disk is unnecessary as

recoverability is not an issue. Therefore, you will want to set

directio=false. Using the directio=false option and dsync=false will

provide improved performance for those databases where

recoverability is not an issue and where the server has I/O contention

at the disk level. You can tell if directio is active once the device

option has been set to true or false by the message written in the ASE

errorlog at startup when the device is initialized. The following

example indicates that dsync was set to false and directio was set to

true.

1> sp_deviceattr tempdb_d1, directio, true

2> go

'directio' attribute of device 'tempdb_d1' turned 'on'. Restart Adaptive

Server for the change to take effect.

(return status = 0)

Excerpt from errorlog:

Initializing virtual device 4, '/tempdb_dev/tempdb_d1' with dsync 'off'.

Virtual device 4 started using asynchronous (with O_DIRECT) i/o.

The next example is the errorlog after the device’s directio option has

been set to false with sp_deviceattr.

1> sp_deviceattr tempdb_d1, directio, false

2> go

'directio' attribute of device 'tempdb_d1' turned 'off'. Restart

Adaptive Server for the change to take effect.

(return status = 0)

Excerpt from errorlog:

Initializing virtual device 4, '/tempdb_dev/tempdb_d1' with dsync 'off'.

Virtual device 4 started using asynchronous i/o.

As you can see, there is no way to tell, just by looking at the errorlog,

that the device has directio set to false. Only by looking at the output

from sp_helpdevice can you be sure of the settings.

1> sp_helpdevice tempdb_d1

2> go


device_name  physical_name                      description                              status  cntrltype  vdevno  vpn_low  vpn_high
-----------  ---------------------------------  ---------------------------------------  ------  ---------  ------  -------  --------
tempdb_d1    /sybase_dev/DCDR04/data/tempdb_d1  special, dsync off, directio off,             2          0       4        0     25599
                                                physical disk, 50.00 MB, Free: 0.00 MB

(1 row affected)
(return status = 0)

Even if sp_helpdevice shows that directio has been set to false, only

the OS administrator can determine if direct I/O is available at the

physical device level.

Therefore, for temporary databases, it is recommended that you

set both dsync=false and directio=false for the devices on which the

databases are allocated. These devices should only be used for their

assigned databases and for only temporary databases. As with any

recommendation, you should run a performance baseline, make your

changes, rerun your performance tests, and then compare the results

to those of your baseline.

Note that with ASE 15, temporary databases no longer write log

pages to disk unless the pages are flushed out of cache, thereby hav-

ing the effect of reducing the impact on the write load.

For more detail on the directio and dsync options, see the chapter

on initializing databases devices in Sybase’s System Administration

Guide.

update statistics

In ASE 15, the update statistics command creates a worktable in

tempdb. The worktable is used to store the statistics for each data par-

tition of a table. Since all tables are now created with a minimum of

one partition, this worktable creation should be accounted for when

resizing tempdb for ASE 15. Although the impact on size is minimal,

if you are running multiple update statistics concurrently against sev-

eral databases, you might want to consider adding additional space

— especially if you are updating the indexes and all column statis-

tics. With the update index or update all statistics options, it is

possible for the temporary worktables to be quite large. The amount


of additional space will depend on the size of the largest table in each

database.
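For example, a statistics run of the kind that can generate sizable worktables (the table name is illustrative):

update index statistics sales_history
go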

Insensitive Scrollable Cursors

When an insensitive cursor is defined, ASE copies all of the resulting

rows to a worktable in tempdb. The worktable is used to process

fetch commands so that the original base table is no longer locked.

For more in-depth information on ASE 15 cursors, see Chapter 4,

“Scrollable Cursors.”
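As a quick illustration, an insensitive scrollable cursor that materializes its result set in a tempdb worktable might be declared as follows (table and column names are hypothetical):

declare author_cur insensitive scroll cursor for
select au_lname, au_fname from authors
go
open author_cur
go
fetch absolute 10 author_cur
go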

Semi-sensitive Scrollable Cursors

When a semi-sensitive cursor is defined, ASE creates a 16 KB cache

for handling the initial set of fetched rows. Once it exceeds the 16

KB cache, a worktable is created in tempdb, but data rows are not

copied to the worktable. The worktable is populated with data rows

as the rows are fetched. When changes are made to rows read from

the worktable, the updates are applied to the base table. In this type of

cursor, the updates are visible via the worktable if the data row is

fetched. Locking on the base table is not released until the last row in

the cursor’s result set is fetched.

Sensitive Scrollable Cursors

With a sensitive cursor, the ASE behavior, as it relates to tempdb

worktable usage, is the same as for a semi-sensitive cursor. Only after

the cursor fills a 16 KB in-memory cache does ASE create a workta-

ble in tempdb. At that time, the worktable starts getting populated

with data rows as the rows are fetched from the base table. When

changes are made to rows read from the worktable, the updates are

applied to the base table. For the sensitive cursor, the updates are vis-

ible via the 16 KB cache and the worktable if the data row is fetched.

Locking on the base table is not released until the last row in the cur-

sor’s result set is fetched.


How to Decide When to Add a Temporary Database

When considering the use of multiple tempdb databases, you have to

understand the strategies available for their use.

Strategies

With the initial introduction of this feature, the strategies provided

allow for limited flexibility. As the functionality matures, additional

strategies need to be available for use.

A basic strategy for using multiple temporary databases is to provide the system administrator ("sa" login) with the ability to log into the ASE server and run system stored procedures and queries against virtual tables such as sysprocesses and syslogshold when the main tempdb database is full. This is probably the first reason a DBA will

use to justify adding a temporary database. When the main tempdb is

full, the “sa” may not be able to determine what process is filling up

tempdb. Without knowing which process to “kill,” the usual correc-

tive measure is to recycle the ASE server, which has further and

perhaps detrimental implications. Once the ASE server is recycled,

there may be limited or no information about what caused the tempdb

to fill up. By creating a tempdb database for “sa” usage only, the

database administrator will be assured of being able to log into the

ASE server and determine the cause of tempdb filling up. This will

allow the database administrator the ability to better determine the

cause of a problem that has affected tempdb and evaluate alternative

solutions other than recycling the ASE server. As a side note, the

MDA tables as discussed in Chapter 9 may be able to provide infor-

mation as to the culprit process that filled up tempdb. Also, with ASE

12.5.2, if the number of user connections has been exceeded and no

additional users can log into the server, a special reserved connection

is kept available by the server for the “sa” to use to log into the

server.

A second strategy deals with having multiple temporary data-

bases for all users to share. This strategy performs some load

balancing by assigning each new user who logs in the next available

temporary database in the group of multiple temporary databases

using a round-robin methodology. In the case where a temporary


database in the group is full, that database will be bypassed and will

not be used until space has been freed up.

A third strategy addresses separating applications onto different

temporary databases or temporary database groups. The purpose of

this strategy is to provide temporary databases that are not shared

across applications (e.g., OLTP application users would not share

their temporary database with ad-hoc users). In this manner, search-

intensive applications that heavily exploit temporary space can be

segregated from normal applications. Furthermore, different storage

systems can be used for different purposes. For example, a high-

speed SAN might be used for OLTP users, while ad-hoc temporary

databases could be placed on slower, less expensive storage.

By choosing the proper mix of strategies, a flexible environment

can be created whereby the resources are maximized for the applica-

tions and users utilizing the ASE server.

What Are Your Needs?

As with any feature of Sybase ASE, you have to determine your

needs before you proceed with implementing the functionality. If you

do not properly define your need for multiple temporary databases,

the resulting environment may cause resource management issues.

When determining your needs, the following questions should be

answered:

• What goal am I trying to accomplish by having multiple temporary databases?
• Who will benefit from this feature?
• Do I have the necessary resources to effectively implement this feature?
• What are the short-term and long-term implications of using this feature?

Once you have answered these questions and have determined

your need for multiple temporary databases, the next step is

implementation.


Implementation Steps

The following steps are intended to ensure that you have properly

installed the feature. These steps are generalized to allow for leeway

in your implementation.

1. Determine whether the temporary databases need to have sepa-

rate data and log segments.

2. Determine the amount of space necessary for each temporary

database that will be created.

3. Define file systems with usable space equivalent to the space

requirements. The use of file systems over raw partitions is

selected since the databases are temporary databases and you will

want the extra performance from the file system buffering. Be

sure to specify the dsync=false ASE option on the disk init

command.

4. Define devices within ASE for each of the database devices.

5. Create the temporary databases using the temporary option of the

create database command (e.g., create temporary database

uc_tempdb_01 …).

6. Using the sp_dbrecovery_order system stored procedure, change

the recovery order of the databases so that the temporary data-

bases are recovered before the application databases.

7. Create temporary database groups and assign the new temporary

databases to the proper groups. If one of the databases is for “sa”

use, do not assign it to any group.

8. If one of the temporary databases is for “sa” use, use the

sp_tempdb system stored procedure to associate the “sa” login to

this database.

9. If desired, use sp_tempdb to associate an application to a tempo-

rary database group.

You are now ready to use multiple temporary databases.


Determining Available Temporary Databases

There are four ways to determine the available temporary databases.

The first method uses the traditional sp_helpdb stored procedure.

The second method uses the new stored procedure sp_tempdb.

The sp_tempdb procedure is used to manage the multiple temporary

database environments. The syntax for sp_tempdb is:

sp_tempdb [

[{ create | drop } , groupname] |

[{ add | remove } , tempdbname, groupname] |

[{ bind, objtype, objname, bindtype, bindobj

[, scope, hardness] } |

{ unbind, objtype, objname [, scope] }] |

[unbindall_db, tempdbname] |

[show [, "all" | "gr" | "db" | "login" | "app" [, name]] |

[who, dbname]

[help]

]

For more details on each of the parameters, consult the Sybase prod-

uct manuals.
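For example, a sketch of creating a group, adding a temporary database to it, and binding an application to the group (the group, database, and application names are hypothetical):

sp_tempdb 'create', 'app_group'
go
sp_tempdb 'add', 'uc_tempdb_01', 'app_group'
go
sp_tempdb 'bind', 'ap', 'trade_entry', 'gr', 'app_group'
go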

The third method uses a new dbcc function that allows a login

with sa role to get a list of available temporary databases. The new

command is dbcc pravailabletempdbs. Since this is a dbcc command,

you will need to specify the traceflag that is appropriate for viewing

the output. If you want the output to display at your terminal, specify

dbcc traceon (3604). Otherwise, the output will be written to the ASE

errorlog. See the example below for a sample of the output from this

new command. Be sure to note that only the database ID is displayed.

Example:

use master

go

dbcc traceon (3604)

go

dbcc pravailabletempdbs

go


Output:

Available temporary databases are:

Dbid: 2

DBCC execution completed. If DBCC printed error messages, contact

a user with System Administrator (SA) role.

The fourth method is to select the status3 bit from mas-

ter..sysdatabases. If the hexadecimal value is 0x0100 (decimal 256),

the database is a user-created temporary database.
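A minimal query along these lines, using the 0x0100 bit described above:

select name, dbid
from master..sysdatabases
where status3 & 256 = 256
go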

Sample Setup for Temporary Database for "sa" Use Only

• Determine if separate data and log devices are needed.

Since the temporary database is for “sa” use only, it is unlikely

that the database would need to be large or that it will need to

have separate data and log devices. However, it is always a good

policy to create a separate data and log device. For this example,

the database will be created with one data device and one log

device.

• Determine the amount of space needed for each temporary database that will be created.

Given that the “sa” will sometimes need to read large amounts of

data, we will use a 100 MB device for data and a 10 MB device

for log.

• Define raw partitions or file systems with usable space equivalent to the space requirements.

For this example, we will assume that there is enough file system

space in an existing directory for us to create the two ASE

devices.

• Define devices within ASE for each of the database devices.

use master

go

disk init

name='sa_tempdb_data',

physname='/sybase/data/sa_tempdb_data',

size="100m",

dsync=false


go

disk init

name='sa_tempdb_log',

physname='/sybase/log/sa_tempdb_log',

size="10m",

dsync=false

go

• Create the temporary databases using the create temporary database command. This is a full command in itself and should not be confused with the create database command.

use master

go

create temporary database sa_tempdb

on sa_tempdb_data = 100

log on sa_tempdb_log = 10

go

• Create temporary database groups and assign each new temporary database to the various groups in which it needs to be defined.

The goal of temporary database groups is to provide a method

whereby the DBA can allocate temporary databases based on the

varying needs of different applications that share data on a

server. If one of the temporary databases is for “sa” use only, it

should not be assigned to any groups.

The group “default” is a system-generated group and the database tempdb is automatically a member of the group. Normally

at this point, the DBA would use the sp_tempdb system stored

procedure to define a new temporary database group and assign

the new temporary database to one or more groups. Since the

newly created temporary database — sa_tempdb — is not for use

by all users, it is not added to the “default” group or any other

group. Using the who option of sp_tempdb will display a list of

active sessions that have been assigned to a temporary database.

use master

go

sp_tempdb 'who', 'tempdb'

go

sp_tempdb 'who', 'sa_tempdb'

go


• Using the sp_dbrecovery_order system stored procedure, change the recovery order of the databases so that the temporary databases are recovered before the application databases.

use master

go

sp_dbrecovery_order sa_tempdb, 1

go

• Use the sp_tempdb system stored procedure to associate the “sa” login with this database.

use master

go

sp_tempdb 'bind', 'lg', 'sa', 'db', 'sa_tempdb'

go

At this point, your sa_tempdb temporary database is ready for use by

the “sa” login only.
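To confirm the binding, the show option of sp_tempdb (shown in the syntax earlier in this chapter) can be used; the exact output format will vary:

sp_tempdb 'show', 'login'
go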

Other Issues

Dropping Temporary Databases

If it is determined that a temporary database is no longer necessary, it can be dropped. The normal drop database command is used to drop a temporary database. There are, however, restrictions; a sample drop sequence follows the list below.

• There can be no bindings to the temporary database at the time the drop database command is issued. An error will be issued if an attempt is made to drop a temporary database that still has bindings.

• There can be no sessions actively using the temporary database.

An error will be raised if there are active sessions against the

temporary database. You can use sp_tempdb who, dbname to

determine active sessions using the database. These sessions can

then be terminated or the DBA can wait until no user sessions are

active against the database.

• No columns in temporary tables on the database can refer back to

Java classes in the source database. This will occur when select

into #temp-table is issued and the source table uses Java classes.
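Given these restrictions, a typical drop sequence for the sa_tempdb database created earlier might look like the following sketch (it assumes no other logins remain bound):

use master
go
-- Remove any remaining bindings to the temporary database
sp_tempdb 'unbindall_db', 'sa_tempdb'
go
-- Confirm there are no active sessions using the database
sp_tempdb 'who', 'sa_tempdb'
go
drop database sa_tempdb
go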


Altering a Temporary Database

A temporary database can be altered. It will have the same restrictions and limitations as any other database. If an attempt is made to alter the model database to a size larger than the smallest temporary database, the alter on model will fail.
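For example, to extend the data allocation of the sa_tempdb database from the earlier sample (sizes in MB; the device name is reused from that example):

use master
go
alter database sa_tempdb on sa_tempdb_data = 50
go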

@@tempdb

A new system variable has been defined for use in determining which

temporary database has been assigned to a user’s connection.

@@tempdb returns the name of the temporary database to which the

user’s connection is associated.
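For example:

select @@tempdb
go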

Summary

The multiple temporary databases feature existed prior to ASE 15 but has been underutilized. The use of this feature should be considered for applications where contention exists for tempdb. Multiple temporary databases allow the DBA to segregate applications from each other and from the “sa” so that tempdb space and resources are not overused by any one individual or application.

When deciding to use additional temporary databases, the DBA

needs to determine the needs of the business and consider the positive

performance impacts additional temporary databases can have on a

database application environment.


Chapter 14

The MDA Tables

The MDA tables were first introduced with the Sybase ASE 12.5.0.3

release. So why, then, do we include this topic in a book designed to

highlight the new features of ASE 15? There are several reasons.

First, the MDA tables are relatively new to Sybase ASE, so new, in fact, that despite their years of existence, few database administrators

use them due to lack of experience. Second, with the release of ASE

15, the opportunity exists to track the usage of semantic partitions

from the MDA tables. And lastly, of course, with any migration from

one release to another, spotting performance regression as well as

tuning the new release optimally are common exercises; MDA tables

provide the utilities to accomplish this.

What Are the MDA Tables?

MDA is an acronym for Monitoring and Diagnostic Access. The

Monitoring and Diagnostic Access “tables” are not really tables —

they are proxy tables on top of native server remote procedure calls

(RPCs). By default, the proxy tables are materialized in the master

database; however, they can be materialized in any database or on

another server, such as a monitoring repository. These proxy tables,

when accessed with Transact-SQL, are created from RPCs that

directly access memory structures when queried. The MDA tables

allow the database administrator to monitor Sybase ASE with Transact-SQL commands.

The MDA tables report information about ASE at a low level.

Unlike the sp_sysmon process, which reports performance


information at the server level, the MDA tables report data at the

query and table level in addition to the server level. Further, the

MDA tables provide information on current activity at the table, procedure, query, and process levels.

Past Solutions

Prior to the introduction of the MDA tables, much of the information that can now be extracted from the MDA tables was either impossible to obtain or difficult to sift through once obtained. Perhaps the

first real detailed performance analysis capabilities for ASE were

implemented using Monitor Server and Historical Server. For example, to capture the top 10 worst performing queries, or top n stored

procedures that execute longer than three seconds, the DBA had to

construct Historical Server views similar to the following:

hs_create_view top10_spid_io,

"Process ID", "Value for Sample",

"Login Name", "Value for Sample",

"Current Stmt Start Time", "Value for Sample",

"Current Stmt CPU Time", "Value for Sample",

"Current Stmt Elapsed Time", "Value for Sample",

"Current Stmt Page I/O", "Value for Sample",

"Current Stmt Logical Reads", "Value for Sample",

"Current Stmt Page Writes", "Value for Sample",

"Current Stmt Batch Text", "Value for Sample"

go

hs_create_filter top10_spid_io, "Current Stmt Batch Text", "Value for

Sample", neq, ""

go

hs_create_filter top10_spid_io, "Current Stmt Page I/O", "Value for

Sample", range, low,30000

go

hs_create_filter top10_spid_io, "Current Stmt Logical Reads", "Value

for Sample", top, 10

go

hs_create_view process_procedure_page_io,

"Login Name", "Value for Sample",

"Process ID", "Value for Sample",

"Kernel Process ID", "Value for Sample",

"Procedure Database Name", "Value for Sample",

"Procedure Database ID", "Value for Sample",


"Procedure Name", "Value for Sample",

"Procedure ID", "Value for Sample",

"Procedure Execution Count", "Value for Sample",

"Procedure CPU Time", "Value for Sample",

"Procedure CPU Time", "Avg for Sample",

"Procedure Elapsed Time", "Value for Sample",

"Procedure Elapsed Time", "Avg for Sample",

"Page I/O", "Value for Sample",

"Page Hit Percent", "Value for Sample",

"Logical Page Reads", "Value for Sample",

"Index Logical Reads", "Value for Sample",

"Physical Page Reads", "Value for Sample",

"Index Physical Reads", "Value for Sample",

"Page Writes", "Value for Sample"

go

hs_create_filter process_procedure_page_io, "Procedure Elapsed Time",

"Value for Sample", range, low, 3000

go

hs_create_filter process_procedure_page_io, "Procedure Execution Count",

"Value for Sample", top, 25

go

One of the problems with this approach was that the Monitor Server

sometimes inflicted a substantial load on the ASE server with the

near-continuous polling. Secondly, the amount of SQL captured in the batch was often limited by the parameter max SQL text monitored, which for complicated queries was insufficient unless set so

high that the overhead per connection made it inadvisable. Another

detractor was that the configuration required event buffers per engine,

which often could require 100 MB or more of memory to prevent

substantial loss. And the lost events were often some of the more

recent events vs. older events, which was not exactly desirable.

A common solution to facilitate the collection of diagnostic and

monitoring information was the introduction of a third-party tool to

capture SQL and other low-level information. Typically, many tools

required the introduction of packet “sniffing,” or the routing of SQL

commands through an additional “monitoring” server. Most of these

tools existed as a “black box,” with the actual overhead and impact

unknown to the database administrator. In addition, each of these

solutions comes with trade-offs, various limitations, and additional

licensing fees.


MDA Table Installation

The installation of the MDA tables is rather simple. Three initial

steps are necessary to operate the MDA tables. First, add a remote

server called “loopback” to your ASE installation:

exec sp_addserver 'loopback',null,@@servername

go

Note: For this text, we will concentrate on installing the remote server as a “loopback” to the originating ASE server. It is possible to configure the MDA tables as proxy tables on the local server. In this manner, a centralized MDA server can be created. Also, the name “loopback” is not required and, in Sybase HA configurations, it must be changed as only one of the servers can use the “loopback” alias. However, changing the name requires altering the installmontables script appropriately, as would installing on a central monitoring repository.

Next, within the $SYBASE (/sybase) directory, under the ASE-15_0/scripts subdirectory, execute the following script to install the MDA proxy tables:

hostname:/sybase/ASE-15_0/scripts 1> isql -Usa -SSYBASE -iinstallmontables -oinstallmontables.out

Finally, grant the mon_role to any administrative user who will need

access to the MDA tables:

exec sp_role 'grant','mon_role','sa'

go

exec sp_role 'grant','mon_role','mda_viewer'

go

At this point, MDA table installation is complete. However, before

the MDA tables become useful, the database administrator will need

to instruct ASE to begin the collection of MDA table data.


MDA Table Server Configuration Options

In order to enable Sybase ASE to collect MDA table data, it is necessary to enable several Sybase ASE configuration parameters. One of these configuration parameters acts as the “parent” switch to start and stop MDA data collection as a whole. Several configuration parameters act as “child” switches that enable or disable MDA data collection at lower levels. A third set of configuration parameters exists to set memory limitations on the amount of MDA data collected into the MDA tables. Modification of the configuration values for the MDA-related parameters will increase or decrease the amount of memory used by Sybase ASE. The following charts differentiate the “parent” configuration parameter along with the “child” and memory-related MDA configuration parameters.

Parent Configuration Parameter

Name Default Range

enable monitoring 0 0 - 1

Child Configuration Parameters

Name Default Range

deadlock pipe active 0 0 - 1

errorlog pipe active 0 0 - 1

object lockwait timing 0 0 - 1

per object statistics active 0 0 - 1

plan text pipe active 0 0 - 1

process wait events 0 0 - 1

sql text pipe active 0 0 - 1

statement pipe active 0 0 - 1

statement statistics active 0 0 - 1

SQL batch capture 0 0 - 1

wait event timing 0 0 - 1


Memory Configuration Parameters

Name Default Range

deadlock pipe max messages 0 0 - 2147483647

errorlog pipe max messages 0 0 - 2147483647

max SQL text monitored 0 0 - 2147483647

plan text pipe max messages 0 0 - 2147483647

sql text pipe max messages 0 0 - 2147483647

statement pipe max messages 0 0 - 2147483647

To list all the MDA configuration options within Sybase ASE, run

sp_configure with monitoring as an argument:

exec sp_configure monitoring

go

Group: Monitoring

(Note that the format of your output may differ from what is shown

here.)

Parameter Name                  Default  Memory Used  Config Value  Run Value  Unit    Type

SQL batch capture 0 0 0 0 switch dynamic

deadlock pipe active 0 0 0 0 switch dynamic

deadlock pipe max messages 0 0 0 0 number dynamic

enable monitoring 0 0 0 0 switch dynamic

errorlog pipe active 0 0 0 0 switch dynamic

errorlog pipe max messages 0 0 0 0 number dynamic

max SQL text monitored 0 4 0 0 bytes static

object lockwait timing 0 0 0 0 switch dynamic

per object statistics active 0 0 0 0 switch dynamic

performance monitoring option 0 0 0 0 switch dynamic

plan text pipe active 0 0 0 0 switch dynamic

plan text pipe max messages 0 0 0 0 number dynamic

process wait events 0 0 0 0 switch dynamic

sql text pipe active 0 0 0 0 switch dynamic

sql text pipe max messages 0 0 0 0 number dynamic

statement pipe active 0 0 0 0 switch dynamic

statement pipe max messages 0 0 0 0 number dynamic


statement statistics active 0 0 0 0 switch dynamic

wait event timing 0 0 0 0 switch dynamic

(return status = 0)

At this point in the installation process the MDA tables are still not

collecting data. The configuration parameters must be enabled to instruct Sybase ASE to begin MDA data collection. What happens if you run SQL

against the MDA tables before the necessary configuration variables

are enabled? The bad news is that the query will fail. The good news

is ASE will instruct the database administrator, or user with the

sa_role, to specifically enable the correct configuration parameters

that are necessary to support the MDA table-based SQL statement.

Consider the following code example discussed later in this chapter:

select SQLText from monSysSQLText

where SQLText like "%Rental%"

With no MDA configuration options enabled, the following message

is returned by ASE:

Error: Number (12036) Severity (17) State (1) Server (loopback) Collection

of monitoring data for table 'monProcessStatement' requires that the

'enable monitoring', 'statement statistics active', 'per object

statistics active', 'wait event timing' configuration option(s) be

enabled. To set the necessary configuration, contact a user who has the

System Administrator (SA) role.
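In that case, enabling the configuration options named in the message resolves the error. Since all of these options are dynamic, no restart is required:

sp_configure "enable monitoring", 1
go
sp_configure "statement statistics active", 1
go
sp_configure "per object statistics active", 1
go
sp_configure "wait event timing", 1
go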

The Parent Switch

To enable or disable the collection of MDA table information at the server level, utilize the sp_configure command with the "enable monitoring" parameter.

sp_configure "enable monitoring", 1

go

Since monitoring is a dynamic option, a change confirmation will be

returned to indicate monitoring is enabled without the need to reboot

ASE. The use of sp_configure in this manner is similar to the manner

in which ASE auditing is either enabled or disabled globally. No

matter what “child” monitoring parameters are enabled when the


“parent” configuration option is changed, MDA data capture is globally disabled when "enable monitoring" is switched from 1 to 0.

The MDA Tables

For Sybase ASE 15, 36 MDA tables are provided. To list and

describe all available MDA tables, issue the following command:

select TableName, Description

from master..monTables

go

TableName Description

monTables Provides a description of all of the available monitoring tables

monTableParameters Provides a description of all of the optional parameters for each

monitoring table

monTableColumns Provides a description of all of the columns for each monitoring table

monState Provides information regarding the overall state of the ASE

monEngine Provides statistics regarding ASE engines

monDataCache Provides statistics relating to data cache usage

monProcedureCache Provides server-wide information related to cached procedures

monOpenDatabases Provides state and statistical information for databases that are currently

in use (i.e., open databases)

monSysWorkerThread Provides server-wide statistics about worker threads

monNetworkIO Provides server-wide statistics about network I/O

monErrorLog Provides the most recent error messages raised by ASE. The maximum

number of messages returned can be tuned by use of the “errorlog pipe

max messages” configuration option.

monLocks Provides information for all locks that are being held and those that have

been requested by any process for every object

monDeadLock Provides information about the most recent deadlocks that have

occurred. The maximum number of messages returned can be tuned by

use of the “deadlock pipe max messages” configuration option.

monWaitClassInfo Provides a textual description for all of the wait classes, e.g., “waiting for

a disk read to complete.” All wait events (see the monWaitEventInfo

table) have been grouped into the appropriate wait class.

monWaitEventInfo Provides a textual description for every possible situation where a

process is forced to wait for an event, e.g., “wait for buffer read to

complete”


monCachedObject Provides statistics for all objects and indexes that currently have pages

cached within a data cache

monCachePool Provides statistics for all pools allocated for all caches

monOpenObjectActivity Provides statistics for all open objects

monIOQueue Provides device I/O statistics, broken down into data and log I/O, for

normal and temporary databases on each device

monDeviceIO Provides statistical information about devices

monSysWaits Provides a server-wide view of events that processes are waiting for

monProcess Provides information about processes that are currently executing or

waiting

monProcessLookup Provides information enabling processes to be tracked to an application,

user, client machine, etc.

monProcessActivity Provides statistics about process activity

monProcessWorkerThread Provides information about process use of worker threads

monProcessNetIO Provides statistics about process network I/O activity

monProcessObject Provides statistical information about process object access

monProcessWaits Provides information about each event that a process has waited for or is

currently waiting for

monProcessStatement Provides statistics for currently executing statements

monSysStatement Provides statistics for the most recently executed statements. The

maximum number of statement statistics returned can be tuned by use of

the “statement pipe max messages” configuration option.

monProcessSQLText Provides the SQL text that is currently being executed. The maximum

size of the SQLtext returned can be tuned by use of the “max SQL text

monitored” configuration option.

monSysSQLText Provides the most recently executed SQL text. The maximum number of

messages returned can be tuned by use of the “sql text pipe max

messages” configuration option.

monCachedProcedures Provides statistics about all procedures currently stored in the procedure

cache

monProcessProcedures Provides information about procedures that are being executed

monSysPlanText Provides the most recently generated plan text. The maximum number of

messages returned can be tuned by use of the “plan text pipe max

messages” configuration option.

monOpenPartitionActivity Provides statistics for all open partitions


Changes from ASE 12.5.3

For ASE 15, one new MDA table was added, and two existing MDA tables were modified to add functionality for the monitoring of semantically partitioned tables. The following list briefly describes the MDA tables that have been modified or are new in ASE 15.

• monProcessObject — Updated to provide information about each partition of an object a process is accessing rather than reporting at the object level.

Note the partition information presented when accessing a table

with round-robin partitions:

select DBName, ObjectName, PartitionName, PartitionSize

from monProcessObject

Output:

DBName ObjectName PartitionName PartitionSize

master monProcessObject monProcessObject_604526156 2

Accounts Rental_RP Rental_RP_543990874 411588

Accounts Rental_RP Rental_RP_589245154 133814

• monCachedObject — Updated to provide information about each partition of an object found in cache rather than reporting at the object level.

The following query highlights current access to a partition. The

query limits the search where ProcessesAccessing is greater than

0 in order to display objects currently accessed by users:

select PartitionID, CacheName, ObjectName, PartitionName
from monCachedObject

where ProcessesAccessing > 0

Output:

PartitionID CacheName ObjectName PartitionName

589245154 default data cache Rental_RP Rental_RP_589245154

464004684 default data cache Rental_RP Rental_RP_543990874


• monOpenPartitionActivity — A new MDA table for ASE 15. This table is very similar to the monOpenObjectActivity table but at the partition level. For ASE 15 partitioned tables, this MDA table will show the monitoring information for each partition of the object. If the object is not partitioned beyond the single default partition (remember, all tables in ASE 15 are considered partitioned), the table shows the same information as monOpenObjectActivity.

The following query highlights a range-partitioned table undergoing a large insert:

select PartitionName, LogicalReads, PhysicalWrites, PagesWritten, RowsInserted
from monOpenPartitionActivity

where ObjectName = "Rental_RP"

go

Output:

PartitionName LogicalReads PhysicalWrites PagesWritten RowsInserted

Rental_RP_589245154 2584932 101559 101559 2298667

Rental_RP_605245211 8 2 2 0

Rental_RP_621245268 8 2 2 0

Note: The monOpenObjectActivity table has not changed and will provide the same information as previous releases. For partitioned objects, the monitoring information is aggregated to provide monitoring information for the object as a whole.
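For comparison, the aggregated object-level totals can be pulled from monOpenObjectActivity with a query like this (a sketch assuming the same column names as the partition-level query above):

select ObjectName, LogicalReads, PagesWritten, RowsInserted
from monOpenObjectActivity
where ObjectName = "Rental_RP"
go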

What Is Meant by “stateful” Tables?

In short, a stateful table remembers what information has been

reported to a specific querying process. As an example, if a process

reads from a stateful MDA table at 12:00:00 AM, and reads the same

stateful MDA table at 12:05:00 AM, the latter read of the MDA table

will only return the new records accumulated in the 5 minutes

between 12:00:00 AM and 12:05:00 AM. Illustrating this concept,

consider the following example of the monErrorLog MDA table:

-- 12:00:00 AM Read of monErrorLog by login "mda_viewer":

select Time, ErrorMessage from monErrorLog

go


Output:

Time ErrorMessage

4/7/2005 11:56:19.576 PM 1 task(s) are sleeping waiting for space to

become available in the log segment for

database Accounts.

4/7/2005 11:57:19.576 PM 1 task(s) are sleeping waiting for space to

become available in the log segment for

database Accounts.

4/7/2005 11:58:19.576 PM 1 task(s) are sleeping waiting for space to

become available in the log segment for

database Accounts.

4/7/2005 11:59:19.583 PM 1 task(s) are sleeping waiting for space to

become available in the log segment for

database Accounts.

-- 12:05:00 AM Read of monErrorLog by login "mda_viewer":

select Time, ErrorMessage from monErrorLog

go

Output:

Time ErrorMessage

4/8/2005 12:00:53.213 AM Cannot read, host process disconnected:

2200 spid: 24

4/8/2005 12:01:19.563 AM 1 task(s) are sleeping waiting for space to

become available in the log segment for

database Accounts.

4/8/2005 12:02:19.563 AM 1 task(s) are sleeping waiting for space to

become available in the log segment for

database Accounts.

4/8/2005 12:03:19.573 AM 1 task(s) are sleeping waiting for space to

become available in the log segment for

database Accounts.

It is confirmed in the above example that only the data that has not

been reported since the last read is reported to the process belonging

to the mda_viewer login. But what if our requirements dictate we

capture the stateful table data and need to allow the same process(es)

to continually scan the same MDA data? The next section discusses a

framework to retain MDA table data in this manner.


Stateful MDA Table Data Management

What can be done to retain access to the data in a stateful table? As a suggestion, utilize one login ID to perform ad-hoc selects against a given stateful table, and use another login to periodically extract or harvest the records from a stateful table into a permanent table. Below are basic steps to create permanent tables to hold the information extracted from the stateful MDA tables.

-- Create the MDA tables in a user database, and extract all current records.

select * into MDA_database..monErrorLog

from master..monErrorLog

-- Subsequent data extractions from MDA table.

insert MDA_database..monErrorLog

select * from master..monErrorLog

-- View MDA data from user database.

select * from MDA_database..monErrorLog

Note these stateful tables are also considered “historical” tables; after one login queries the table, subsequent queries from the same login do not display the data. However, the data persists, up to the maximum value specified in the controlling memory parameter. In the case of the monErrorLog MDA table, this parameter is errorlog pipe max messages. With errorlog pipe max messages set to 1000, on the 1001st write to the monErrorLog MDA table, new inserts will “circle back” to replace the least recently added record in the stateful table. Think of a cache structure that acts as a ring or a doughnut to visualize this scenario.

The complete list of stateful historical tables is:

• monDeadLock
• monErrorLog
• monSysPlanText
• monSysSQLText
• monSysStatement

To facilitate the extraction of data from the stateful historical tables,

refer to the following Unix shell script for an example. For the

reader’s convenience, the MDA configuration parameters and the

syntax to create the permanent MDA tables are embedded into the


comments section of this shell script. The values provided in the shell

script could act as a starting point for MDA parameters for an initial

installation on a low-volume server.

#!/bin/ksh

#set -uvx

#

#---------------------------------------------------------------------

#

# Name -- MDA_Stateful_Extract.sh

#

# Purpose -- Extract data from MDA Stateful Historical Tables.

#

# By -- Brian Taylor

#

# Date -- 04/07/2005

#

#

# Server configuration dependencies:

#

# -- Support for monErrorLog

# sp_configure "errorlog pipe active", 1

# go

# sp_configure "errorlog pipe max messages", 1000

# go

# --Support for monDeadLock

# sp_configure "deadlock pipe active", 1

# go

# sp_configure "deadlock pipe max messages", 1000

# go

# --Support for monSysStatement

# sp_configure "statement pipe active", 1

# go

# sp_configure "statement statistics active", 1

# go

# sp_configure "statement pipe max messages", 10000

# go

# --Support for monSysSQLText

# sp_configure "sql text pipe active", 1

# go

# sp_configure "sql text pipe max messages", 10000

# go

# --Support for monSysPlanText

# sp_configure "plan text pipe active", 1

# go

# sp_configure "plan text pipe max messages", 10000


# go

#

# MDA_Database Table creation SQL:

#

# select * into MDA_database..monErrorLog

# from master..monErrorLog

# go

# select * into MDA_database..monDeadLock

# from master..monDeadLock

# go

# select * into MDA_database..monSysStatement

# from master..monSysStatement

# go

# select * into MDA_database..monSysSQLText

# from master..monSysSQLText

# go

# select * into MDA_database..monSysPlanText

# from master..monSysPlanText

# go

#---------------------------------------------------------------------

#---------------------------------------------------------------------

# Set script parameters

#---------------------------------------------------------------------

SERVER=$1

SERVER_NO=$2

DATABASE="MDA_database"

if [ $# -lt 2 ]; then

echo "Usage: $0 <SERVER> <SERVER NO>"

exit 1

fi

LOAD_USER="mda_viewer"

RECIPIENTS="[email protected]"

RUNDATE=$(date "+%m%d%y_%H:%M:%S")

#---------------------------------------------------------------------

# Get Sybase Directory

#---------------------------------------------------------------------

if [ $SERVER_NO = "1" ]

then

SYBMAIN=/sybase

else

SYBMAIN=/sybase$SERVER_NO

fi


SYBASE_VER=$(ls -1 $SYBMAIN | grep ASE- | cut -d"/" -f1)
SYBOCS_VER=$(ls -1 $SYBMAIN | grep OCS- | cut -d"/" -f1)

SYB_OCS=$SYBMAIN/$SYBOCS_VER

SYB_ASE=$SYBMAIN/$SYBASE_VER

SYBASE=$SYBMAIN

OUTPUT_FILE=$SYBMAIN/scripts/MDA_Stateful_extract.out

#---------------------------------------------------------------------

# Get Sybase login password for server.

# NOTE: For simplicity and demonstration, the password is hard-coded

# into this script. Please protect your password in

# accordance with your company's security policy.

#---------------------------------------------------------------------

PASSWD=password

$SYB_OCS/bin/isql -U${LOAD_USER} -P${PASSWD} -S${SERVER} <<EOF > ${OUTPUT_FILE}

set nocount on

go

use master

go

select getdate()

go

insert MDA_database..monDeadLock

select * from master..monDeadLock

go

insert MDA_database..monErrorLog

select * from master..monErrorLog

go

insert MDA_database..monSysStatement

select * from master..monSysStatement

go

insert MDA_database..monSysSQLText

select * from master..monSysSQLText

go

insert MDA_database..monSysPlanText

select * from master..monSysPlanText

go

select getdate()

go

EOF


/usr/bin/mailx -s "${SERVER} MDA Table Extract Complete at ${RUNDATE}" $RECIPIENTS < ${OUTPUT_FILE}

exit 0

Within the MDA_Stateful_Extract.sh script, notice the higher configuration values for the monSysPlanText, monSysSQLText, and monSysStatement message counts. These three tables will collect data much faster than the monErrorLog and monDeadLock tables. For the three high-volume MDA configuration parameters, values of 10,000 should be sufficient for a low-activity development server, where the MDA data is swept to a permanent MDA database about every five minutes. Obviously, the configuration values in a high-activity production environment will be significantly higher, or the MDA tables’ historical stateful information will need to be swept from the MDA tables to permanent tables on a more frequent basis.

Note: When collecting MDA historical stateful information into permanent tables, based on the high volume of the monSysSQLText, monSysPlanText, and monSysStatement MDA tables, plan to archive or purge data collected from these tables on a scheduled basis, or risk the exhaustion of free space in your permanent MDA table database.

To get an idea of how much data is collected from the stateful tables,

the following snapshot of an MDA configured server is provided.

The counts are from a two-day examination of a development server

with minimal activity. Note the high counts for the monSysSQLText, monSysPlanText, and monSysStatement tables in relation to the counts associated with the monErrorLog and monDeadLock tables:

monErrorLog          7479
monDeadLock             0
monSysStatement    275062
monSysSQLText      129321
monSysPlanText     227283


As noted, we recommend pruning or archiving data from the MDA history database on a frequent basis. How frequent depends on the size of the database configured to hold the MDA history data and how long the MDA history information will remain useful. Generally, the information in the three high-volume MDA tables becomes obsolete with the passage of time. For this reason, a “rolling purge” may be acceptable. A script controlled by an operating system job scheduler, such as cron or “scheduled tasks,” can be written to include the following in order to purge data older than three days from the high-activity historical stateful MDA tables and older than 30 days from the remaining historical stateful MDA tables:

delete MDA_database..monErrorLog

where Time <= dateadd(dd,-30,getdate())

delete MDA_database..monDeadLock

where ResolveTime <= dateadd(dd,-30,getdate())

delete MDA_database..monSysPlanText

from MDA_database..monSysPlanText T,

MDA_database..monSysStatement S

where S.StartTime <= dateadd(dd,-3,getdate())

and S.KPID = T.KPID

go

delete MDA_database..monSysSQLText

from MDA_database..monSysSQLText T,

MDA_database..monSysStatement S

where S.StartTime <= dateadd(dd,-3,getdate())

and S.KPID = T.KPID

go

delete MDA_database..monSysStatement

where StartTime <= dateadd(dd,-3,getdate())

go

Note: Upon reboot of ASE, all MDA counters are reset to 0.


SQL Use

A primary benefit of the MDA tables is that they allow the DBA to execute SQL against the tables in order to extract server information that is otherwise unavailable through SQL. While the MDA tables can be accessed with SQL, avoidance of certain SQL syntax is recommended.

Avoid subqueries and joins on the MDA tables. Use the construct

of the “permanent” MDA table database mentioned earlier in this

chapter, or copy the targeted MDA rows to tempdb as the first step of

any query against the MDA data.

The MDA tables are memory resident. A join executed between

MDA tables will result in performance degradation for the target

instance of ASE. Additionally, two references to the same table in one query, as provided in a self join, will likely result in the comparison of result sets that are not identical.

Useful MDA Table Queries

The MDA tables provide a great deal of information on Sybase ASE, and for some database administrators, the amount of information can be overwhelming. Here, we attempt to highlight some queries against the MDA tables that will be of the most benefit to the database administrator.

When was Sybase ASE last started?

select "Sybase ASE Start Date" = StartDate from monState

go

Output:

Sybase ASE Start Date

4/11/2005 2:47:19.956 PM

What SQL has accessed the Rental table?

select SQLText from monSysSQLText

where SQLText like "%Rental%"


Output:

SQLText

select count(*) from Rental

select user_name(TRIG.uid),TAB.name,TRIG.name,TRIG.crdate from sysobjects

TRIG,sysobjects TAB where TRIG.id = object_id('dbo.Rental_tiu') and

TRIG.type='TR' and (TRIG.id=TAB.instrig or TRIG.id=TAB.updtrig or

TRIG.id=TAB.deltrig) and ((TRIG.id = TAB.deltrig

select * from AccountManagement..Rental

select SQLText from monSysSQLText where SQLText like "%Rental%"

Is the Rental table in cache, and if so, how much cache is taken up by

the table?

select ObjectName, CacheName, CachedKB from monCachedObject

where ObjectName = "Rental"

go

Output:

ObjectName CacheName CachedKB

Rental default data cache 43

What physical device is experiencing the most write activity?

-- First insert the rows into tempdb since this query involves a subquery.

select PhysicalName, Writes into tempdb..monDeviceIO

from master..monDeviceIO

go

-- Execute the query against the tempdb copy of the MDA table.

select PhysicalName, Writes

from tempdb..monDeviceIO where Writes = (select max(Writes) from

tempdb..monDeviceIO)

go

Output:

PhysicalName Writes

/dev/vx/rdsk/mdadg/dbspace9 21575


MDA Alternatives

An obvious alternative to MDA tables is sp_sysmon. In fact, sp_sysmon and MDA tables share access to some of the same counters (as indicated by monTableColumns.Indicators & 2 = 2). However, because of the level of aggregation and lack of details, the

sp_sysmon procedure is appropriate for monitoring performance and

metrics at the server level. Some database administrators regularly

run sp_sysmon and capture the results to a file system in order to

monitor server-level performance over a period of time. This method

of system info extraction can be taken further with the addition of

scripting logic that harvests the sysmon information from files and

pushes this information back into the database. While this has not

been officially deprecated by Sybase, with little exception, the data

that can be collected by sp_sysmon can also be collected by MDA

tables — with the notable difference in the ease of extracting the

information as well as the additional details. One exception to this is

the Replication Agent counters, which are currently still visible from

sp_sysmon with no corresponding MDA table.
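The overlap between the two can be inspected directly (a sketch assuming the Indicators column referenced above; bit 2 flags counters that sp_sysmon also reports):

select TableName, ColumnName
from master..monTableColumns
where Indicators & 2 = 2
go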

In Chapter 9, we presented information on how to capture Query Processing Metrics. While the information is not as rich as that contained in the MDA tables, Query Processing Metrics contains SQL

text information, as well as the tracking of information related to

statistics time and I/O.

Summary

The MDA tables provide low-level information DBAs can utilize to track server, session, and database characteristics over time or in a snapshot. In this text, we have provided a basic understanding of the MDA tables and outlined some of the issues involved in the support and setup of the MDA tables. Additionally, in maintaining the Sybase ASE 15 flavor of this text, we have identified the limited MDA table changes between ASE 15 and previous releases of ASE. Finally, we suggested a set of alternatives to the MDA tables. Between the MDA tables, the sp_sysmon system procedure, the QP Metrics process, and many of the diagnostic tools available to the DBA, Sybase has provided a fairly robust and diverse set of “out-of-the-box” monitoring capabilities.


Chapter 15

Java, XML, and Web Services in ASE

Java and XML in the database were introduced in Adaptive Server

Enterprise 12 (major changes were introduced in 12.5.1). Web Services were introduced in Adaptive Server Enterprise in 12.5.2. The

purpose of this chapter is to provide an overview of these features,

highlight the enhancements in Adaptive Server Enterprise 15, and

identify areas to consider when using these features. This chapter is

not intended to make you an expert in Java, XML, or Web Services.

It also does not try to cover all these features in detail. For additional

information on the topics covered in this chapter, see the following

reference books from Sybase:

• Java in Adaptive Server Enterprise
• XML Services within Adaptive Server Enterprise
• Web Services User’s Guide
• Utility Guide
• System Administration Guide


Introduction

Java and XML in the database provide the ability to execute Java

code from within ASE. Additionally, XML can be stored within ASE

without having to break up the XML into relational tables. Sybase

has built on these features to allow Web Services to be defined within

the database. This chapter covers these three areas:

• Java in the database
• XML in the database
• Web Services

Java in the Database

Java in the database gives you two main advantages:

• Java classes can be used as datatypes; thereby, datatypes can be created to handle more complex structures than standard SQL datatypes. By using Java classes, these complex datatypes can be well defined and self-documented.
• Java methods can be executed inside the database server. Like standard Java, methods can be created that execute on an instance of the Java class, or static methods can be created that are attached to the class and not an instance of the class. In addition, Sybase provides the ability to wrap the static methods with SQL names and invoke them as you would standard Transact-SQL stored procedures and functions.

To support Java in the database, ASE comes with its own Java virtual

machine (VM), specifically developed for handling Java processing

in the server. This Java VM comes with its own JDBC 2.0 driver for

accessing the database in Java methods running within ASE.

When JDBC classes are used within client applications, an external JDBC driver for accessing the database is required. The built-in

JDBC driver can only be used by Java methods that are executed

within ASE. The typical client JDBC driver that is used to access

ASE is jConnect from Sybase.


Installing Java Classes

To install Java classes within ASE, the following tasks need to be

performed:

• Enable the server for Java using sp_configure. The database administrator will need to shut down and restart the server for the option to take effect. By default, ASE is not enabled for Java.

sp_configure "enable java", 1

• Use the installjava (Unix) or instjava (Windows) utility to install the classes contained within the uncompressed JAR files into the ASE database. Note that Java classes are installed within a specific database. If an application needs classes in multiple databases, those classes will have to be installed into each of those databases.

• Memory allocations for Java will have to be changed using sp_configure. The amount of memory necessary will largely depend on the size of the Java objects when fully instantiated (code + data structures) as well as the number of concurrent instances in use.

• The Java classes that are installed within the database can be determined by using sp_helpjava.

For more information, see the Java in Adaptive Server Enterprise and

Utility Guide Sybase reference manuals.

Creating Java Classes and JARs

Within ASE, Sybase uses a 1.2.2 JVM. The JDK 1.2.2 can be used to compile Java classes for use within ASE. Java objects can still be created using later compilers without having to resort to downloading

archived Java versions. With the J2SE 1.4.2_06 and the J2SE 5.0

Update 5 versions of the Java compiler, the “-target 1.1” compiler

option has to be used to create JDK 1.1 compatible classes in order to

install them into Sybase ASE. If this option is not used or the “-target

1.2” option is used, Sybase generates a “java.lang.ClassFormatError”

exception when the class is referenced. With the J2SE 5.0 Update 5

version of the Java compiler, the compiler will also require the use of

the “-source 1.2” compiler option.

Therefore, when compiling with the JDK 1.2.2, the Java classes that can be created are limited to JDK 1.2 compatibility; when using later JDKs, the Java classes that can be created are limited to JDK 1.1 compatibility. Also, there are some


classes that are not appropriate inside a database engine. Sybase does

not support these classes. This includes the GUI Java API classes and

most network classes. See the Sybase documentation for more

information.

Note: Sybase also requires the JAR files to be uncompressed in order to be installed. To create an uncompressed JAR file, use the jar utility's 0 (store only, no compression) option, e.g., jar cf0.

Using the installjava Utility

The syntax for installjava is:

installjava

-f file_name

[-new | -update]

[-j jar_name]

[ -S server_name ]

[ -U user_name ]

[ -P password ]

[ -D database_name ]

[ -I interfaces_file ]

[ -a display_charset ]

[ -J client_charset ]

[ -z language ]

[ -t timeout ]

For example, to install classes in the addr.jar file, enter:

installjava -f "/home/usera/jars/addr.jar"

The -f parameter specifies an operating system file that contains a

JAR. The complete file name for the JAR file must be used.

When installing a JAR file, ASE copies the file to a temporary

table and then installs from there. If a large JAR file is installed, the

size of tempdb may need to be expanded using the alter database

command.

Configuring Memory for Java in the Database

Use the sp_configure system procedure to change memory allocations for Java in Adaptive Server. Memory allocation can be changed

for:

• global fixed heap — Specifies memory space for internal data structures
• process object heap — Specifies the total memory space available for all user connections using the Java VM
• shared class heap — Specifies the shared memory space for all Java classes called into the Java VM

Use the following command to see all of the current settings related

to Java in the database:

sp_configure "Java Services"

See “Java Services” in Sybase’s System Administration Guide for

complete information about these configuration parameters.

Java Classes as Datatypes

When using Java classes as datatypes within Transact-SQL, those

classes must be defined as public and implement java.io.Serializable.

Java classes can then be used as the datatype for SQL columns, Transact-SQL variables, and Transact-SQL stored procedure parameters. Also, the default values for SQL columns can be a Java class.

Note: The Sybase documentation mentions java.io.Externalizable, but

this is a more difficult interface to implement than java.io.Serializable

since it requires the Java class to serialize its own fields. The

java.io.Serializable interface indicates that the class can be serialized

with the standard Java serialization methods.

Following Java best practices, Java class fields should be declared as

private. If those fields need to be accessed, the class should provide

the necessary get and set methods. Package names should also be

used. The examples within the Sybase reference manuals do not

always follow these best practices. However, a common bad practice

in Java programming that will cause significant performance and

address contention issues in Sybase ASE is the creation of separate

set/get methods for components that are referenced together. For

example, setStreet(), setCity(), setState(), and setZipCode() are

grossly inefficient compared to a single constructor style method

setAddress(), which has street, city, state, and zipcode as parameters.

Static variables within classes intended to be stored as data elements should not be used, since Sybase states that the scope of static variables (within the Sybase JVM) is implementation dependent and can therefore be unreliable.


An Example of Table Definition Using a Java Class

This example illustrates using the Java class com.sybase.Address as

the datatype for a SQL column and defining the default value for that

column using the class constructor that requires only a string as input.

create table javaTest (AddressID int,

address com.sybase.Address default new com.sybase.Address("") in row)

Performance Considerations

The above example defines the Java class column as being in-row.

An in-row object can occupy up to approximately 16 KB, depending

on the page size of the database server and other variables. This

includes its entire serialization, not just the values in its fields. If the

server cannot fit the object within the database row, an exception is

generated. The default value is off-row. An off-row object is stored in

the same manner as text and image data items and therefore will be

stored on separate data pages from the row itself. Therefore, for

better performance, it is recommended to have in-row Java objects,

but ensure the objects will fit within 16 KB. Use the following with a

Java class object to help you make this decision:

select datalength (new class_name(...))

An Example of Using a Java Class within a Select

Sybase requires the use of “>>” instead of the normal “.” dot notation

in order to reference the method on a class. For example:

select address>>getCity() from javaTest

Note that if you plan on using Java method calls within the where clause of queries, a huge performance gain could be realized by creating a function-based index using the Java method call.
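For instance, a hypothetical sketch (it assumes the method is deterministic and that ASE 15's function-based index support accepts this expression form; verify against the current documentation):

create index city_idx on javaTest (address>>getCity())
go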

Executing Java Methods

Following Java best practices, the data within a Java class datatype should never be accessed directly. If those fields need to be accessed without any additional logic performed on those fields, simple get and set methods should be created. If additional logic needs to be performed, then having a datatype as a Java class gives you the ability to control and provide business logic around the datatype.


If you find that you need a procedure or function that cannot be

built using Transact-SQL, an alternative is to use the Sybase features

to create a procedure or function from a Java method. In order to

accomplish this, the following must be done:

• Define the method as public and static.
• Use the create procedure and create function statements to define a SQL name for the method, as in the sketch below.
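For example, a SQLJ-style function wrapping a built-in static Java method might look like the following sketch (see the Java in Adaptive Server Enterprise manual for the full syntax):

create function square_root(input_number double precision)
returns double precision
language java parameter style java
external name 'java.lang.Math.sqrt'
go

-- Invoke it as you would a Transact-SQL function
select square_root(2.0)
go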

Class Static Variables

In the Sybase manual, Sybase recommends not using any static variables within the Java classes used within ASE, since the scope of static variables (within the Sybase JVM) is implementation dependent and can be unreliable.

If state information is needed to be saved within the class, an

alternative would be to incorporate an ASE table via the internal

JDBC driver.

Another alternative would be to use static variables with the

understanding that Sybase might change the implementation scope

for these variables in the future. With ASE 15, Sybase allocates an

internal JVM for every client session or spid. Therefore, any static

data structures set or modified within a Java class remain for the

duration of the client session. This feature can be exploited to create

variables that persist across SQL batches.

Recommendations and Considerations

Only create Java classes in ASE when the use of SQL alone is not sufficient for datatypes, procedures, or functions. Java does give you a more powerful and flexible language in order to create more complex datatypes, procedures, and functions. Also, these Java classes will need to be JDK 1.2 compatible.


XML in the Database

XML in the database provides the following capabilities:

• XML can be stored in the database. The complete XML document, or selected nodes and values within it, can then be retrieved using the XPath expression language.
• Any SQL result set can be converted to return an XML document.
• An external directory of XML documents can be mapped into a logical table within ASE.

To support XML in the database, a native XML processor comes

with ASE starting with version 12.5.1. There is also a Java-based

XQL (XML) processor that comes with ASE starting with version

12.5. As mentioned in the Sybase XML Services manual, this XQL

processor is a preliminary implementation of the XPath-based XML

query facilities. Its capabilities are superseded by those of the native

XML processor. This author has used both, and Sybase has improved

the processing of XML with the native XML processor. There is no

reason to use the old Java-based XQL (XML) processor.

XML Stored in the Database

XML can be stored in any database or in the file system. When considering whether to use a database for XML capabilities or keep the XML in the file system, three areas need to be considered:

• How does the database handle large character strings? The main reason to use XML is to manage chunks of complex and unstructured data. These chunks could be considered large character strings.
• What ability does the database server have to manage XML specifically?
• What ability does the database server have to query XML?

The main options to consider when storing XML data into ASE are

the following:

• Store the XML document into a text datatype.
• Store the XML document into an image datatype using the xmlparse function.
• Store the XML document into an image datatype using an external compression utility like pkzip.
• Access XML documents through Sybase ASE rather than storing them in the database.

� Access XML documents through Sybase ASE rather than storing

them in the database.

The following describes these options to allow you to determine

which option should be used. An explanation on how to store HTML

is also given, followed by overall recommendations and

considerations.

For all options, see the section called “Performance and Sizing”

for more detailed information.

Option 1: Store the XML Document into a Text Datatype

This option stores the XML document into a text datatype without

any additional processing. It uses the same logic normally used to

store data into a text datatype. The storage requirement requires little

more than the actual raw size of the actual XML document. (A 1 MB

XML document requires a little over 1 MB to store in the database.)

The time required for loading and retrieving these documents is

mainly dependent on the network connection speed between the client and the database. The administrator needs to configure the

procedure cache to hold the largest document that will be inserted

into the database.

Users can still access an element, attribute, or sub-document within these documents using XPath, but the access time grows exponentially as the size of the documents increases. If users only need access to the complete XML documents, then XPath would be unnecessary. In this case, turn off the "enable xml" option and access the XML documents with standard SQL.

If there are only a few elements that users need to access within the XML document, consider replicating the data into separate table columns to speed up read access. With ASE 15, computed columns can accomplish the same thing whenever a new XML document is inserted into the database. The following table definition extracts the geographyID element value contained within the XML being inserted and stores it in the ID column:

create table xml (xml text,
    ID int compute xmlextract('//ns0:geographyID/text()', xml) materialized)
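To query such a document ad hoc rather than through a computed column, the same function can be used directly in a select list. The following is a minimal sketch against the xml table defined above, reusing the hypothetical geographyID element; adapt the XPath expression to your own document structure.

-- Pull one element value out of each stored document.
select xmlextract('//ns0:geographyID/text()', xml) as geographyID
from xml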


Option 2: Store the XML Document into an Image Datatype Using xmlparse

This option requires users to temporarily store the XML document in a text datatype using the same procedure as option 1. As in option 1, the time required for loading the documents to this temporary location depends on the network connection speed. Reconfiguring the procedure cache may be required.

Then the xmlparse function is used against the temporary text datatype to create the indexed XML document, which is then stored in an image datatype. The Sybase XML Services manual describes this indexed XML document as a parsed form of the XML document; the adjective "indexed" better describes the output of the xmlparse function in that it adds internal indexes on all the elements within the XML document. The xmlparse function requires the "heap memory per user" parameter to be configured for executing against the largest XML document expected. This memory requirement and the time needed to create the indexed XML document increase exponentially with the size of the XML document.

The database storage requirements for this option more than double the storage requirements for storing the XML document as plain text.

Especially for large XML documents, the advantage of creating an indexed XML document is that it allows you to quickly access an element, attribute, or sub-document within the document using XPath. Using XPath, the original XML document can also be retrieved quickly.
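As a sketch of the flow this option describes, assuming hypothetical staging and target tables xml_text(id int, xml text) and xml_image(id int, xml image), the parsed form can be produced in one pass:

-- Parse the staged text documents into their indexed (parsed) image form.
insert into xml_image (id, xml)
select id, xmlparse(xml)
from xml_text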

Option 3: Store the XML Document into an Image Datatype Using Compression

This option does not require ASE's XML Services; the database administrator can turn off the "enable xml" option. This option does not allow users to access an element, attribute, or sub-document within the document using XPath.

Additionally, users are required to compress the XML document outside the database and store the compressed document in an image datatype. Since parts of the XML document cannot be accessed, an additional column on the table that contains the XML document will be needed to identify the XML document.

The main benefits of this option are that the storage requirement within the database is much smaller, the speed of storing and retrieving the XML documents is much faster, and the memory requirement for storing the XML documents (i.e., "procedure cache size") is much less. These benefits are obtained mainly because XML data is highly compressible (typically greater than 90%) since, by its nature, it contains repeatable text. Any compression utility will work. This author used the zip classes within Java to create zip files as follows; these zip files were then streamed into the image datatypes.

// Assumes xmlString is a ByteArrayOutputStream holding the XML text, and
// that java.io.FileOutputStream, java.util.zip.ZipEntry, and
// java.util.zip.ZipOutputStream are imported.
FileOutputStream fileOutputStream = new FileOutputStream(zipFilename + ".zip");
ZipOutputStream zipOutputStream = new ZipOutputStream(fileOutputStream);
ZipEntry zipEntry = new ZipEntry(filename + ".txt");
zipOutputStream.putNextEntry(zipEntry);
xmlString.writeTo(zipOutputStream);   // stream the XML into the zip entry
zipOutputStream.closeEntry();
zipOutputStream.close();

Option 4: Store the XML Document Outside the Database

This option stores the XML document in a file system directory and maps the XML document to a logical proxy table. A view can then be created to eliminate the unnecessary columns from the proxy table and to add additional columns based on the XML document file name and/or data within the XML document.

One disadvantage with this option is that users cannot, in the same insert, add columns for data not contained within the XML document. All the other options allow descriptive data to be added within the same row as the XML document; this data can then be used to query for the XML document. With this option, there is no physical table within the database containing the XML document, and therefore descriptive data has to be contained within its own table and related to the XML document via some ID common to both. The appearance of the XML document through the proxy table and the inserting of this descriptive data cannot be wrapped within a database transaction.

The following is an example of creating a proxy table and view:

create proxy_table _xmlDirectory external directory at
    "/remotebu/disk_backups/xmlDirectory"

create view xmlDirectory (ID, code, xml) as
select convert(integer, substring(filename, 6, 3)),
       convert(varchar(10), xmlextract('//ns0:geographyID/text()', content)),
       content
from _xmlDirectory

The proxy table that is created maps the XML document (the content field above) into a logical image datatype. Except for the column being an image datatype, users can access an element, attribute, or sub-document within these documents using XPath the same way they would if the XML document were stored in a text datatype. The performance considerations are the same as if users were accessing an XML document stored within a text datatype. There is no additional procedure cache requirement for inserting the document, since no database insertion takes place.

The "enable file access" and "enable cis" options have to be set

for this option. If the xmlextract function is used, the "enable xml"

option also needs to be set.

HTML Stored in the Database

In the XML Services within Adaptive Server Enterprise manual, Sybase states that HTML is not well suited for extracting data. Even though HTML does not describe the data very well, HTML can be stored and parsed using XPath. The XHTML standard, which makes HTML well-formed XML, needs to be followed. For example, all tags have to have ending tags in the XHTML standard, which isn't a requirement for HTML.
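A small illustration of the difference: the first fragment is legal HTML but is not well-formed XML, while the XHTML version closes every tag and can therefore be parsed with XPath.

<!-- HTML: the break and paragraph tags may be left unclosed -->
<p>first line<br>second line

<!-- XHTML: every tag is closed, so the document is well-formed -->
<p>first line<br/>second line</p>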

Recommendations and Considerations

The option used for storing XML and accessing XML within the database depends on how applications will use the XML documents. If applications do not need access to parts of an XML document, then do not index the document with xmlparse. If the documents are small and applications have a relatively large number of them, store them as text, but add additional indexed table columns to access the XML documents.

For caching of typical requests and responses from a document literal web service, the database administrator might index the requests so an application can determine if a request has already been processed. Usually requests are relatively small. However, since responses are relatively large and usually processed as one big chunk of data, you should compress the responses.

In other words, evaluate the advantages and disadvantages of each option and determine which option or combination of options fits your application.

Performance and Sizing

For the “XML in the Database” research, a Java test suite (using JUnit) was created to test performance and sizing. These tests were executed with a number of iterations using the following configuration options. The performance and sizing numbers below were taken from one execution of this test suite; these results were consistent with the other test results.

sp_configure "enable java", 0

All Java Services have been turned off. None of the “XML in the

Database” features are dependent on the Java Services.

sp_configure "enable xml", 1

Enables the use of all the XML query functions (xmlextract,

xmltest, xmlparse, and xmlrepresentation) as defined in the XML

Services manual.

sp_configure "enable file access", 1

sp_configure "enable cis", 1

These two configuration options enable the use of the create

proxy_table command in order to access XML files within a file

system directory.

sp_configure "procedure cache size", 30000

This configuration option was originally set this high in order to

insert a 28 MB XML file into the database. All data in an insert

statement (including the stream XML) is stored in procedure

cache before the data is physically inserted into the database

table.

sp_configure "heap memory per user", 40000000

This is needed by the XML functions. Specifically, the xmlparse

function uses heap memory to create the parsed image XML

from the text XML. This amount of heap memory was needed to

parse a 7 MB text file.
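To confirm what a parameter is currently set to before running tests like these, sp_configure can also be executed with just the parameter name, which reports the parameter's default, config value, and run value:

sp_configure "heap memory per user"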


Description of the Tests

To support the testing iterations, 210 files were created for each file size. For the small files, there were some slight size differences depending on the data within the XML; real data, based on 210 unique IDs, was used to create these files. To create the larger files, the data from the 210 small files was combined in various ways. Any code retrieval logic against the larger files used the 210 unique IDs in a manner that accessed different parts of the XML document.

The “Storage Use” numbers were determined by executing the sp_spaceused system procedure against the table to get the reserved amount. For the Unix directories, the du -sk command was used.

For the “Text Table” and “Zip Table,” the “Insert Times” include the time to read the file into the Java program and insert the file into the database table. For the “Image Table,” this time includes the time it took to execute xmlparse against the “Text Table” to create the image datatype in the “Image Table.”

The “Retrieval Times” are the times it took to read the XML document from the table and write it out to a file. For the “Image Table,” the text version of the XML document was retrieved (no index information).

The “Code Retrieval” time includes only the time it took to retrieve one integer ID from each XML document.

Performance and Sizing Numbers

XML File Size (bytes)          Small          Medium         Big            Very Big
Average file size              3,818          704,110        3,518,678      7,036,888
Smallest file size             1,142          704,110        3,518,678      7,036,888
Largest file size              10,421         704,110        3,518,678      7,036,888

Storage Use (KB)               Small          Medium         Big            Very Big
Text table size                1,120          164,670        821,132        1,642,246
Image table size               3,916          361,654        1,799,342      3,596,956
Zip table size                 444            8,432          39,508         78,986
Directory size                 890            146,166        724,085        1,444,806

Total Insert Times             Small          Medium         Big            Very Big
Text table                     5.65 seconds   1.62 minutes   8.26 minutes   17.6 minutes
Image table                    1.95 minutes   23.8 minutes   4.56 hours     19.8 hours
Zip table                      6.67 seconds   11.5 seconds   40.3 seconds   79.8 seconds
Directory table                N/A            N/A            N/A            N/A

Average Insert Times           Small          Medium         Big            Very Big
Text table                     0.03 seconds   0.46 seconds   2.36 seconds   5.04 seconds
Image table                    0.55 seconds   6.79 seconds   78.1 seconds   5.66 minutes
Zip table                      0.03 seconds   0.05 seconds   0.19 seconds   0.38 seconds
Directory table                N/A            N/A            N/A            N/A

Total Retrieval Times          Small          Medium         Big            Very Big
Text table                     3.23 seconds   34.8 seconds   2.37 minutes   4.44 minutes
Image table                    13.6 seconds   3.90 minutes   18.7 minutes   37.0 minutes
Zip table                      2.24 seconds   7.20 seconds   7.97 seconds   14.3 seconds
Directory table                5.90 seconds   39.8 seconds   2.54 minutes   3.87 minutes

Average Retrieval Times        Small          Medium         Big            Very Big
Text table                     0.01 seconds   0.16 seconds   0.68 seconds   1.17 seconds
Image table                    0.06 seconds   1.11 seconds   5.34 seconds   10.6 seconds
Zip table                      0.01 seconds   0.03 seconds   0.04 seconds   0.07 seconds
Directory table                0.03 seconds   0.19 seconds   0.73 seconds   1.10 seconds

Total Code Retrieval Times     Small          Medium         Big            Very Big
Text table                     14.6 seconds   21.4 minutes   4.17 hours     19.3 hours
Image table                    10.6 seconds   21.8 seconds   60.0 seconds   1.73 minutes
Zip table                      N/A            N/A            N/A            N/A
Directory table                16.0 seconds   20.3 minutes   4.39 hours     19.2 hours

Average Code Retrieval Times   Small          Medium         Big            Very Big
Text table                     0.07 seconds   6.10 seconds   71.6 seconds   5.52 minutes
Image table                    0.05 seconds   0.10 seconds   0.28 seconds   0.49 seconds
Zip table                      N/A            N/A            N/A            N/A
Directory table                0.08 seconds   5.81 seconds   75.3 seconds   5.50 minutes

Conclusions

Some key conclusions from this test data, referred to by the various XML options above, are:

•	Storing text XML documents in the database takes only a little more storage than storing the same XML documents in a file system.
•	Indexing (parsing) the XML documents more than doubles the database storage requirements.
•	Compressing the XML documents before storing them greatly decreases the database storage requirements.
•	The time to insert into text and image datatypes relates directly to the size of the XML document being inserted.
•	The memory requirement and the time needed to create the indexed (parsed) XML document increase exponentially with the size of the XML document.
•	Indexing the XML documents greatly speeds up the retrieval of elements within large XML documents.
•	There is almost no difference in performance between storing text XML documents in the database and storing those documents in a file system and accessing them through a proxy table.

SQL Result Sets Converted to Return an XML Document

Adaptive Server Enterprise allows you to convert a SQL result set to an XML result set (XML document) using the "for xml" clause. This will not create a complex business XML document, and you cannot specify the structure of the XML document. Adaptive Server Enterprise will, however, convert result sets to XML as illustrated below. There is a resultset root element that contains row elements; each row element corresponds to a row in the original SQL result set. Each row element contains elements that have the same names as the column names in the original result set, and these elements contain the data corresponding to that column and row in the original result set. For more information, see the "XML Mapping Functions" chapter in the Sybase XML Services manual.

<resultset xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <row>
    <column 1 name>row 1 data for column 1</column 1 name>
    <column 2 name>row 1 data for column 2</column 2 name>
    <column 3 name>row 1 data for column 3</column 3 name>
  </row>
  <row>
    <column 1 name>row 2 data for column 1</column 1 name>
    <column 2 name>row 2 data for column 2</column 2 name>
    <column 3 name>row 2 data for column 3</column 3 name>
  </row>
</resultset>
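As a minimal sketch of producing such a document, assuming the testTable used in the Web Services example below (an int ID column and a char(10) data column), the clause is simply appended to the query:

select ID, data from testTable
for xml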


Web Services

Web Services in ASE give you the ability to execute SQL and stored procedures with a web service interface. You also have the ability to access external web services from within ASE. You do not, however, have the ability to create web services. The following gives an overview; for more information, see the Web Services User's Guide.

Web Services Producer

The Web Services Producer component gives clients a web services interface for executing SQL and stored procedures within ASE. In order to use this interface, the Web Services Producer needs to be started as described in the Web Services User's Guide, and the following configuration parameter needs to be set:

sp_configure "enable webservices", 1

The Web Services Producer interface provides the following methods:

•	execute — Executes a SQL statement or stored procedure
•	login — Establishes a persistent connection to Adaptive Server Enterprise
•	logout — Explicitly terminates an Adaptive Server Enterprise connection

The login and logout methods are only needed to initiate a persistent connection. The execute method takes the following parameters:

execute aseServerName username password sqlxOptions sql

For more information on these parameters, see the Web Services User's Guide. Also see the XML Services manual for more information on the SQLX options.

The following is an example of a Web Services request that uses the Web Services Producer interface for the execute method. This simple example selects two fields (an ID field defined as an int and a data field defined as a char(10)).

<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <SOAP-ENV:Body>
    <m:execute xmlns:m="urn:genwsdl.ws.ase.sybase.com"
        SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
      <service xsi:type="xsd:string">server_name</service>
      <userName xsi:type="xsd:string">user_name</userName>
      <password xsi:type="xsd:string">password</password>
      <sqlxOptions xsi:type="xsd:string">format=no,root=no</sqlxOptions>
      <sql xsi:type="xsd:string">select ID, data from db..testTable</sql>
    </m:execute>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

The following is the resulting output response, with the ID field having the row number as its value and the data field having the string value of "row x", where x corresponds to the row number:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <soapenv:Header>
    <ns1:sessionID soapenv:actor="" soapenv:mustUnderstand="0"
        xsi:type="xsd:long" xmlns:ns1="http://xml.apache.org/axis/session">
      7987313730286340973</ns1:sessionID>
  </soapenv:Header>
  <soapenv:Body>
    <ns2:executeResponse
        soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
        xmlns:ns2="urn:genwsdl.ws.ase.sybase.com">
      <executeReturn xsi:type="soapenc:Array"
          soapenc:arrayType="ns3:DataReturn[1]"
          xmlns:ns3="http://producer.ws.ase.sybase.com"
          xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/">
        <item href="#id0"/>
      </executeReturn>
    </ns2:executeResponse>
    <multiRef id="id0" soapenc:root="0"
        soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
        xsi:type="ns4:DataReturn"
        xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
        xmlns:ns4="http://producer.ws.ase.sybase.com">
      <XML xsi:type="xsd:string">&lt;row&gt;&lt;ID&gt;1&lt;/ID&gt;
        &lt;data&gt;row 1&lt;/data&gt;&lt;/row&gt;&lt;row&gt;&lt;ID&gt;2&lt;/ID&gt;
        &lt;data&gt;row 2&lt;/data&gt;&lt;/row&gt;&lt;row&gt;&lt;ID&gt;3&lt;/ID&gt;
        &lt;data&gt;row 3&lt;/data&gt;&lt;/row&gt;</XML>
      <updateCount xsi:type="xsd:int">3</updateCount>
      <DTD xsi:type="xsd:string">&lt;!-- Cannot generate a DTD for the
        rootless XML forest specified by the &apos;root=no&apos; option --&gt;</DTD>
      <schema xsi:type="xsd:string">&lt;xsd:schema
        xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema&quot;&gt;
        &lt;xsd:complexType name=&quot;RowType.resultset&quot;&gt;&lt;xsd:sequence&gt;
        &lt;xsd:element name=&quot;ID&quot; type=&quot;INTEGER&quot;/&gt;
        &lt;xsd:element name=&quot;data&quot; type=&quot;CHAR_10&quot;/&gt;
        &lt;/xsd:sequence&gt;&lt;/xsd:complexType&gt;
        &lt;xsd:complexType name=&quot;TableType.resultset&quot;&gt;&lt;xsd:sequence&gt;
        &lt;xsd:element name=&quot;row&quot; type=&quot;RowType.resultset&quot;
        minOccurs=&quot;0&quot; maxOccurs=&quot;unbounded&quot;/&gt;
        &lt;/xsd:sequence&gt;&lt;/xsd:complexType&gt;
        &lt;xsd:simpleType name=&quot;INTEGER&quot;&gt;
        &lt;xsd:restriction base=&quot;xsd:integer&quot;&gt;
        &lt;xsd:maxInclusive value=&quot;2147483647&quot;/&gt;
        &lt;xsd:minInclusive value=&quot;-2147483648&quot;/&gt;
        &lt;/xsd:restriction&gt;&lt;/xsd:simpleType&gt;
        &lt;xsd:simpleType name=&quot;CHAR_10&quot;&gt;
        &lt;xsd:restriction base=&quot;xsd:string&quot;&gt;
        &lt;xsd:length value=&quot;10&quot;/&gt;
        &lt;/xsd:restriction&gt;&lt;/xsd:simpleType&gt;&lt;/xsd:schema&gt;</schema>
    </multiRef>
  </soapenv:Body>
</soapenv:Envelope>

Web Services Consumer

The Web Services Consumer component enables ASE to access the web services of other applications. These external web services are mapped to ASE proxy tables at run time. In order to use the external web services, the Web Services Consumer needs to be started as described in the Web Services User's Guide, and the following configuration parameter needs to be set:

sp_configure "enable webservices", 1

The Web Services Consumer server needs to be known by ASE. Add this server by executing the following, where 'ws' is the Web Services Consumer server name and 'hostname:port' is where the Web Services Consumer service is listening:

sp_addserver 'ws', sds, 'hostname:port'

Now access can be added to any external web service. To establish access, follow the example below, where 'URI' is the HTTP location of the WSDL (Web Services Description Language) document and 'ws' is the Web Services Consumer server name as defined above. Note that the sp_webservices system procedure will need to be added to your installation using the installws script.

sp_webservices 'add', 'URI', 'ws', ['operation_name=proxy_table [, operation_name=proxy_table]*']

This takes every operation name found within the WSDL and creates a proxy table to represent the inputs and outputs of the operation. The proxy table name will be the same as the operation name (truncated to 28 characters) unless the proxy table name for an operation is overridden in the sp_webservices call. Therefore, if there are five operations within the WSDL, five proxy tables will be created corresponding to those operations. For RPC/encoded operations, the proxy table will contain a column for each input and output parameter, where each input column name starts with an underscore ("_"). For document literal operations, the proxy table will contain two columns: _inxml and outxml.

To invoke the web service operation, execute a select statement against the proxy table, supplying a value for all the input parameters/columns. For example, the following executes a document literal operation:

select outxml from getReportingInformation
where _inxml =
    '<m:getReportingInformationRequest
         xmlns:m="local:company:business:standard:reference">
       <m:reportingID>
         <m:rowID>501</m:rowID>
       </m:reportingID>
       <m:effectiveDate>2005-02-03</m:effectiveDate>
     </m:getReportingInformationRequest>'
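For comparison, an RPC/encoded operation is queried through its parameter columns rather than through an XML string. The following is a minimal sketch against a hypothetical getRate operation whose proxy table exposes an input column _currency and an output column rate:

select rate
from getRate
where _currency = 'USD'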


Recommendations and Considerations

Accessing the database via a web services interface can be very useful, and the Web Services Producer gives a simple interface for this ability. It can be used by a number of products whose main communication protocol is web services.

Accessing external web services from within ASE can also be very useful; however, creating the XML inputs to the web services could be complicated, depending on the web service.

These web services abilities give designers additional options when developing applications using Sybase ASE.


Appendix A

Sybase ASE 15 Certification Sample Questions and Answers

1. What arguments are required on a disk init?

A. name, physname, vdevno

B. name, logical_name, vstart, size

C. name, physname, size

D. name, size, dsync

2. What are the allowable logical page sizes in Sybase ASE 15?

A. 2 K, 4 K, 6 K, 8 K

B. 2 K, 4 K, 8 K, 16 K

C. 2 K only

D. 2 K, 4 K, 8 K, 16 K, 32 K

3. What is true for max memory? (choose all that apply)

A. max memory >= total logical memory

B. total logical memory >= max memory

C. max memory + total logical memory = memory cache

D. If you set allocate max shared memory to 1, Adaptive Server

allocates all the memory specified by max memory at

startup.


4. Which role or roles are necessary to execute queries against any

of the MDA tables?

A. sa_role

B. mon_role

C. sybase_ts_role

D. Two of the above

E. All of the above

5. For scrollable cursors, which of the following extensions is not a

valid extension to the fetch command?

A. first

B. last

C. absolute

D. previous

E. relative

6. For sensitive cursors, which of the following statements is false?

A. The default sensitivity for a cursor is insensitive.

B. A sensitive cursor can be declared as a scrollable cursor.

C. An independent data change is visible to a sensitive cursor

only when a change to the base table causes a row in the cursor result set to be inserted or deleted.

D. A scrollable cursor is read only in ASE 15.

7. What characteristic about the QP Metrics capture process is

false?

A. The QP Metrics data can be used to identify the query with

the most physical I/O.

B. Enabled at the server level with the sp_configure parameter

C. Captures server-wide data to a centralized table in the master

database from all user databases

D. Can be used to identify the most frequently executed query

in a database

E. None of the above

8. Given the following code scenario:

set metrics_capture on
go
use tempdb
go
select au_id from pubs2..authors where au_id = "427-17-2319"
go

In which database will the QP Metrics information be captured

for the above query?

A. tempdb

B. master

C. pubs2

D. sybsystemdb

E. It depends in which database the most I/O was generated by

the query.

9. A cursor declared as semi-sensitive and scrollable will contain all

possible rows in the cursor’s worktable after:

A. Every possible row in the cursor set has been fetched

B. The cursor is opened

C. The first row is fetched

D. The last cursor row is fetched with the fetch last <cursor_name> command

E. One of the above

F. Two of the above

10. What is the term limitation for the members of an “in” statement

in a query for ASE 15?

A. Unlimited

B. 256

C. Limited only by the size of procedure cache

D. 65536

E. 210

11. Pick the false statement: Scrollable cursors can be used to:

A. Access the same data row an infinite number of times

B. Update the same data row an infinite number of times

C. Access the nth row from current cursor position

D. Jump to the last row in a cursor result set

E. All of the above are true


12. Which system command will provide command-line information

similar to the graphical plan displayed by ASE 15’s plan viewer?

A. set showplan on

B. set viewplan on

C. set statistics plancost on

D. set statistics io on

E. set option show_best_plan normal

13. Which one or more partition type(s) is/are not valid semantic partition types in ASE 15?

A. Hash

B. Range

C. List

D. Group

E. Round-robin

14. The datachange function is used for which of the following?

A. Identify rows changed within a table

B. Report the percentage of data changed at the partition level

on one column

C. Report the percentage of data changed on any column

D. Two of the above

E. One of the above

F. All of the above

15. What maintenance operation cannot be performed at the partition

level in ASE 15?

A. Truncate partition

B. Partition-level reorg

C. Partition-level bcp

D. Drop partition

E. None of the above

16. What dbcc command will provide a list of the available temporary databases?
A. dbcc prtavailabletempdbs
B. dbcc pravailabletempdbs
C. dbcc availabletempdbs
D. dbcc lstavailabletempdbs
E. None of the above

17. When creating Sybase devices for temporary databases, which of

the following are true?

A. The devices can be created on raw partitions.

B. The devices can be created as file system devices.

C. The dsync option of disk init should be set to false.

D. Two of the above

E. One of the above

F. All of the above

18. Given the following sequence of commands:

use master

go

sp_dbrecovery_order database_1, 1

go

sp_dbrecovery_order database_2, 2

go

sp_dbrecovery_order database_1, -1

go

sp_dbrecovery_order database_2, -1

go

sp_dbrecovery_order database_1, 3

go

sp_dbrecovery_order database_3, 1

go

and assuming the databases have the following database IDs:

Database      Database ID
database_1    5
database_2    6
database_3    7

What is the order of the above user databases during database

recovery when ASE is restarting?

A. database_1, database_2, database_3

B. database_1, database_3, database_2

C. database_3, database_2, database_1


D. database_3, database_1, database_2

E. None of the above

19. For temporary databases, which of the following is true?

1) Applications can be assigned to specific temporary

databases.

2) Applications can only be assigned to one temporary

database.

3) Only the “sa” ID can be assigned to a specific temporary

database.

4) Sybase logins can be assigned to a specific temporary

database.

A. Three of the above

B. Two of the above

C. One of the above

D. All of the above

E. None of the above

20. For temporary tables in tempdb, how many characters of the 255

available for table names are used to non-uniquely identify the

temporary name?

A. 237

B. 238

C. 239

D. 219

21. Which of the following commands create a worktable in tempdb?

1) update statistics

2) create sensitive cursor

3) create semi_sensitive cursor

4) group by

A. 1, 2, 4

B. 1, 3, 4

C. 2, 3, 4

D. 1, 2, 3

E. All of the above


22. On which segment are the QP Metrics stored?

A. default

B. system

C. logsegment

D. user defined

E. Two of the above

23. What can you not do with Java in the database?

A. Access tables within the Java class using the built-in JDBC

driver

B. Create a user-defined function (UDF) from a Java class that

can be used anywhere that you use a built-in SQL function

C. Load the Java source and compile the source within the

database

D. Use the methods and fields within a Java class within a SQL

select statement

24. What sp_configure options need to be turned on to use the XML

query functions (xmlextract, xmltest, xmlparse, and

xmlrepresentation)?

A. "enable java"

B. "enable xml"

C. "enable cis"

D. "enable java" and "enable xml"

25. What sp_configure options need to be turned on to access XML stored on a file system using the "create proxy_table" command?

A. "enable xml"

B. "enable file access"

C. "enable cis" and "enable file access"

D. "enable xml" and "enable file access"

26. What memory is used when inserting XML into a table?

A. "heap memory per user"

B. "size of global fixed heap"

C. "size of process object heap"

D. "procedure cache size"


27. What memory is used to create an indexed or “parsed” XML

document using the XML query function xmlparse?

A. "heap memory per user"

B. "size of global fixed heap"

C. "size of process object heap"

D. "procedure cache size"

28. Which of the following are valid optimization goals?

A. allrows_oltp, allrows_mix

B. allrows_oltp, firstrows_mix

C. allrows_dss, firstrows_oltp

D. allrows_dss

29. Which of the following statements are true about the optimization goals?

A. Optimization goals can be set at the server level.

B. Optimization goals can be set at a database level.

C. Optimization goals can be set at a query level.

D. Optimization goals can be set at a session level.

E. All of the above

30. In the following command, what does “10” represent?

set plan opttimeoutlimit 10

A. The number of query plans evaluated for a query

B. The max time spent in executing a query (in milliseconds)

C. The max time spent optimizing a query in milliseconds

D. The max time spent optimizing a query as a percentage of

the total time spent processing the query

31. When the query processor reaches the “optimization timeout

limit,”

A. The query aborts with an error message.

B. The query uses the last query plan it found.

C. The query uses the best available plan and does not throw

any warning or an error message.

D. The query returns a partial result set with a warning.


32. In the following example, what is going to be the datatype for the

column total_cost?

create table parts_table

(part_no int,

name char(30),

list_price money,

quantity int,

total_cost compute quantity*list_price

)

A. int

B. money

C. numeric

D. None of the above

The next three questions are based on the following code scenario:

create table rental_materialized
(cust_id int, start_dt as getdate() materialized, last_change_dt datetime)

insert into rental_materialized (cust_id, last_change_dt)
values (1, getdate())

33. What will be the output of the following query:

select start_dt from rental_materialized

A. The getdate() value when the data was inserted

B. The getdate() value when the data was selected

C. Null

D. None of the above

34. The keyword “materialized” in the above example means that:

A. The value of the column is preevaluated and stored in the

table

B. The value of the column is evaluated at the time it is

accessed

C. The value of start_dt is the same as the value stored in

another table named materialized

D. None of the above


35. If the keyword “materialized” were not used,

A. The default characteristic of start_dt would have been “not

materialized”

B. The default characteristic of start_dt would have been

“materialized”

C. The create table statement would throw an error

D. None of the above

36. Which of the following statements is true about computed

columns?

A. Computed columns can have defaults.

B. Computed columns cannot be null.

C. Both A and B

D. Neither A nor B

37. What is the SERVER_NAME.krg file in the $SYBASE directory?

A. It is the first configuration file created during server build

process.

B. It stores the information about the shared memory segments

used by the server.

C. It is the backup of the interface file in an encrypted mode.

D. None of the above

38. System stored procedures are created in which of the databases:

A. master

B. sybsystemprocs

C. model

D. The user-defined databases

The next two questions are based on the following code:

create procedure inv_proc as

create table #tempstores

(stor_id char(4), amount money)

exec insert_inv_proc

go

create proc insert_inv_proc as

insert into #tempstores (stor_id, amount)

values ("abcd", 1)

go


39. If the above piece of code is executed within the same session in

the given order, the create procedure statement for inv_proc will:

A. Fail because it can’t find the #tempstores

B. Be created with some warning messages

C. Fail because insert_inv_proc doesn’t exist

D. None of the above

40. If the above piece of code is executed within the same session in

the given order, the create procedure statement for

insert_inv_proc will:

A. Be created with some warning messages

B. Fail because it can’t find the #tempstores

C. Be created with no warning messages

D. None of the above

41. Which of the following statements is true about the truncate table

command?

A. It is a non-logged operation.

B. It will fire the delete trigger.

C. It will fire the delete trigger if the table has a clustered index.

D. None of the above

42. Which one or more of the following events happen when a table

is dropped?

A. All the views associated with the table are also dropped.

B. All the triggers on this table are also dropped.

C. All the indexes on this table are also dropped.

D. None of the above

43. Which of the following is true about the tempdb:

A. A user needs to be aliased to guest ID in tempdb in order to

create objects as “guest.”

B. A table created in the tempdb with # prefix can be accessed

by multiple sessions.

C. A table created in the tempdb by the tempdb.. prefix can be

accessed by multiple sessions.

D. All of the above


44. If copying the data into a table using the bcp command fails, some of

the possible causes could be that:

A. The database has a transaction dump scheduled for every 15

minutes

B. The table has an insert trigger

C. The “select into/bulk copy” option for the database was

turned off

D. A database dump was not taken just prior to issuing the bcp

command

45. Which one or more of the following statements is true about

copying data into a table using the bcp command?

A. bcp observes any defaults defined for the column.

B. bcp observes any rules defined for the column.

C. bcp fires the insert trigger on the table.

D. None of the above

46. What does the flag “-e” do in an isql command?

A. It writes the errors into the errorlog.

B. It echoes the input commands.

C. It echoes the output.

D. It is an invalid flag for the isql command.

For questions 47 and 48, consider the following code sample:

declare CSR1 semi_sensitive scroll cursor

for select property_id, owned_date

from Rental..Properties

where property_id < 300

open CSR1

fetch last CSR1

fetch next CSR1

47. What is the value of the global variable @@SQLSTATUS after

the fetch next statement?

A. 2

B. 0

C. –1


D. 1

E. None of the above

48. Which of the following are true about the cursor after fetch last is

executed?

A. The cursor must be closed after the last row is obtained.

B. All cursor rows are present in the cursor’s worktable after the

fetch last statement.

C. To resume processing the scrollable cursor, fetch first must

be issued to reposition the cursor’s pointer back to the first

row before fetching additional rows from the cursor.

D. Two of the above

E. None of the above

49. Update statistics can be performed at the following levels of

granularity:

A. Column

B. Partition

C. Index

D. Table

E. Three of the above

F. All of the above

50. Which of the following executions of the datachange function

will measure the data changed for the identification column of

the authors table, but only for the authors_part4 partition:

A. select datachange("identification","authors","authors_part4")

B. select datachange("authors_part4","authors","identification")

C. select datachange("authors","identification","authors_part4")

D. select datachange("authors","authors_part4","identification")

E. None of the above

51. Which of the following statements are true about datarows

locking?

A. It is the only lock granularity permitted on system tables.

B. It is the default locking strategy for ASE upon installation.

C. It can help eliminate contention for index pages.

D. None of the above

E. All of the above


52. Which of the following is not a premium licensed feature?

A. XML in the database

B. Java in the database

C. High Availability

D. Semantic partitions

E. None of the above; they are all premium licensed features

53. ASE supports the following lock granularities:

A. Row

B. Page

C. Partition

D. Table

E. All of the above

F. None of the above; ASE does not lock data

54. The reorg command can be used to perform the following:

A. Reclaim space for a semantic partition on a table with

datapages locking

B. Reclaim space for a semantic partition on a table with

allpages locking

C. Undo row forwarding for a semantic partition on a table with

datarows locking

D. Compact a table that uses allpages locking

E. All of the above

55. The following can be separately bound to a named cache:

A. Database

B. A table’s nonclustered index

C. A semantic partition

D. Column

E. All of the above

56. How many pages are allocated when a nonclustered index is created on a non-partitioned table that contains no data?

A. Zero

B. One

C. Four

D. Eight


E. None of the above

57. Generally, which of the following columns would be the best

candidate to be the partition key for a hash partitioned table?

A. Status column where values are 0 or 1

B. U.S. Social Security numbers

C. Months of the calendar year

D. Days of the week

E. None of the above

58. The “hide-vcc” option of the bcp command is used for what?

A. Hide virtual computed columns

B. Hide virtual control characters

C. Suppress error messages during bcp

D. All of the above

E. None of the above

59. What is the maximum device size for an ASE device?

A. 10 gigabytes

B. 512 gigabytes

C. 1 terabyte

D. 2 terabytes

E. 4 terabytes

60. Which of the following is a search argument (SARG)?

A. where order_number > 12345

B. where customer_name like “%James%”

C. where department_id != 5

D. where commission = sales * 0.06

E. None of the above

61. Which of the following is the most accurate description of the

system table sysslices?

A. Contains information on worker processes

B. Tracks all partition related data in ASE

C. Is only used during the upgrade process

D. Contains timeslice configuration settings for ASE

E. None of the above


For question 62 reference the following index DDL:

create clustered index authors_clustered_global_index_pk

on authors(au_id)

create clustered index authors_clustered_local_index

on authors(au_id) local index

62. The second clustered index statement will:

A. Fail, since a table cannot have more than one clustered index

B. Succeed, since the first clustered index DDL is invalid as the

index name is over 30 characters

C. Succeed, since the second index is local while the first index

is global

D. Fail, since multiple indexes cannot be created on the same

key

E. None of the above

63. Which of the following ASE databases are optional?

A. tempdb

B. sybsystemdb

C. sybmgmtdb

D. model

E. None of the above

64. Differences between the results of count(column_name) and

count(*) are most likely attributed to what:

A. Old statistics

B. Lack of column statistics

C. Nulls

D. Duplicate data

E. None of the above

65. Consider the following bcp command:

bcp Properties..Rental out

A. The bcp command will fail, as it is incomplete.

B. The bcp command will wait for input from the user.

C. The bcp command will succeed.


D. The bcp command will succeed, but only if the Rental table

is not a partitioned table.

E. None of the above

66. What set command will provide information similar to the

Graphical Plan Viewer but at the command-line level?

A. set showplan on

B. set statistics io on

C. set option show_best_plan long

D. set statistics plancost on

E. None of the above

67. Which of the following statements are true?

1) directio is a feature that is managed by ASE, not the operat-

ing system.

2) dsync should be set to false for file system devices used by

temporary databases.

3) If dsync is set to true, directio can be set to either true or

false.

4) directio should be used with raw partitions.

A. 1, 2

B. 2, 4

C. 1, 3

D. 2 only

E. 3 only

68. Which databases are required by any ASE server?

1) master

2) model

3) tempdb

4) sybsystemdb

5) sybsystemprocs

6) sybsecurity

7) sybsyntax

A. 1, 2, 3, 4

B. 1, 2, 3, 5

C. 1, 2, 3, 5, 6


D. 1, 2, 3, 4, 7

E. 1, 2, 3, 4, 5

F. All of them

69. Which of the following are valid SySAM reports?

1) Raw

2) Raw Data

3) Summary Barchart

4) Server Usage

5) Usage Summary

A. 2, 4

B. 1, 4, 5

C. 1, 3, 4

D. 2, 3, 5

E. 1, 3, 5

70. Which of the following are valid grace period activation

scenarios?

1) At server startup

2) Prior to the license expiration date

3) When a license feature is activated via sp_configure

4) When the license server is down

A. Three of the above

B. Two of the above

C. One of the above

D. All of the above

E. None of the above

71. Which of the following statements is true about

LM_LICENSE_SERVER?

A. It is required for a local license server environment.

B. It contains the name(s) of the redundant remote license

servers.

C. It has to be defined at the time that an ASE server is brought

up if that server is participating in a networked licensed

environment.

D. It contains the host and port number for network license

servers.


72. When you create a table and do not specify the number of partitions, how many partitions are created and of which partitioning strategy?

A. One partition using round-robin partitioning

B. One partition using hash partitioning

C. Two partitions using range partitioning

D. Two partitions using round-robin partitioning

73. Which of the following is not true about table/index partitioning?

A. A table can only be partitioned using one of the four partitioning schemes.

B. A partitioned index can be either global or local.

C. A table partition can be dropped if it is no longer necessary.

D. A table can be altered to increase the number of partitions.

74. Which of the following is true about temporary worktables?

A. order by creates a temporary worktable.

B. group by creates a temporary worktable.

C. select distinct creates a temporary worktable.

D. Temporary worktables can only be created in tempdb.

E. The name of the user-defined temporary worktable can be

found in sysobjects.

75. Which is the lowest granularity of access that can be granted?

A. Database

B. Table

C. Column

D. Row

E. Data value

76. Given the following events, what is the correct order of events

for creating a user database?

A. Define the database

B. Define the devices

C. Define the stored procedures

D. Define the tables

E. Define the triggers


For questions 77 and 78, consider the following sequence of dump

database and dump transaction commands:

9:00 AM Dump Database

10:00 AM Dump Transaction

11:00 AM Dump Transaction

12:00 PM Dump Database

1:00 PM Dump Transaction

2:00 PM Dump Transaction

77. At 2:30 PM, the DBA attempts to restore the database from the

backups made in this example after it was discovered that a user

accidentally deleted sensitive information. It is discovered the

12:00 PM database dump was accidentally deleted. Up to what

point can this database be recovered?

A. 9:00 AM

B. 10:00 AM

C. 11:00 AM

D. 1:00 PM

E. 2:00 PM

F. This database cannot be recovered.

78. Using the same Dump Database and Dump Transaction sequence

from the previous question, consider the following: At 10:43

AM, sensitive data was deleted from an active high-volume user

table. Considering recovery scenarios, to what point can the data-

base be recovered while preserving the most possible changes to

the database and recovering the deleted sensitive data?

A. 9:00 AM

B. 10:00 AM

C. Just before 10:43 AM

D. 11:00 AM

E. The database cannot be recovered.

79. Which of the following dump database commands correctly

instructs the server to compress the backups?

A. dump database pubs2 to "compress::4::/db_backups/pubs2.dmp"
B. dump database pubs2 to "compress::4::/db_backups/pubs2.dmp" at REMOTE_BACKUP
C. dump database pubs2 to "/db_backups/pubs2.db" with compression = 4
D. All of the above
E. None of the above

80. What is the default transaction isolation level for ASE at the time

of installation?

A. 0

B. 1

C. 2

D. 3

E. 4

Answers for Sybase ASE 15 Certification Sample Questions

1. C. name, physname, size

2. B. 2 K, 4 K, 8 K, 16 K

3. A. max memory >= total logical memory

and

D. If you set allocate max shared memory to 1, Adaptive Server

allocates all the memory specified by max memory at startup.

4. B. mon_role

5. D. previous

The additional extensions to the fetch command are next and

prior.

6. C. An independent data change is visible to a sensitive cursor

only when a change to the base table causes a row in the cursor result set to be inserted or deleted.

There are other conditions that will make an independent data

change visible to scrollable cursors, such as:

• When changes to the base table cause the value of a referenced

column to change.


• When changes to the base table force a reorder of the rows in a

cursor result set.

7. C. Captures server-wide data to a centralized table in the master

database for all user databases

The QP Metrics information is captured on a per-database basis.

Every database has a sysquerymetrics view, including the system

databases.

8. A. tempdb

QP Metrics are captured to the database where the session originates, even when a reference is made to a different database.

9. F. Two of the above

A sensitive cursor’s worktable is built as rows are fetched from

the cursor set. Not until all rows are fetched by the cursor, or the

last row is fetched, is the worktable fully populated.

10. C. Limited only by the size of procedure cache

For ASE 15, the 256-member limitation from prior releases of

ASE is eliminated.

11. B. Update the same data row an infinite number of times

Updates to cursor rows are not currently allowed.

12. C. set statistics plancost on

Execution of this command at the command line will provide a

similar breakdown of the query tree in a non-GUI format.

13. D. Group and E. Round-robin

The valid semantic partition types in ASE 15 are range, list, and

hash. The round-robin partition, while a valid partition type, is

not a semantic partition type.

14. D. Two of the above

B and C are correct. The datachange function reports the percentage of data changed at the column, table, index, and partition levels, and can be used to scrutinize the level of change at the partition level on a column.


15. D. Drop partition

For ASE 15, if a partition is no longer needed, the best option is

to truncate the data in an unused partition.

16. B. dbcc pravailabletempdbs

17. D. Two of the above

The ASE devices can be created as either raw partitions or file

system devices. The dsync option should be set to true for

devices used for temporary databases to avoid the verification of

the write to disk.

18. D. database_3, database_1, database_2

The first two statements set the database recovery order to data-

base_1, database_2, all other databases in database ID order.

The next statement removes database_1 from the specified

recovery order. Therefore, the databases will be restored in the

following order: database_2, followed by the remaining data-

bases in database ID order.

The next statement removes database_2 from the specified

recovery order. Therefore, the databases will be restored in data-

base ID order.

The next statement throws the following error: Msg 18604,

Level 16, State 1: sp_dbrecovery_order: Invalid recovery order.

The next valid recovery order is: 1. Therefore, the databases will

be restored in database ID order.

The next statement sets database_3 as the first user database

to be restored. Therefore, the databases will be restored in the

following order: database_3, followed by the remaining data-

bases in database ID order.

19. B. Two of the above

1 and 4

20. B. 238

17 characters are appended to uniquely identify the table.

21. D. 1, 2, 3

“group by” is an option for the select command. It does not cre-

ate a worktable in tempdb.


22. B. system

The QP Metrics process uses a view called sysquerymetrics,

which is based on the sysqueryplans table. The sysqueryplans

table resides on the system segment for all databases.

23. C. Load the Java source and compile the source within the

database.

Java source code cannot be compiled within the database. To

install a Java class from Java source code, the source code has to

be compiled using a Java compiler outside of ASE to create Java

byte code (in .class files). Those .class files are then saved into an

uncompressed JAR file (for example, using jar cf0), and then the

JAR file is loaded to an ASE database using the installjava

(Unix) or the instjava (Windows) Sybase utility.

24. B. "enable xml"

These XML query functions are not dependent on having Java

Services ("enable java") installed. Only the XML Services

("enable xml") need to be installed.

25. C. "enable cis" and "enable file access"

The create proxy_table command uses Component Integration

Services (CIS). By specifying mapping to an external directory,

the "enable file access" option also needs to be turned on. XML

Services "enable xml" only needs to be turned on if Sybase-spe-

cific XML capabilities are being used like the XML query

functions. If the XML document is just going to be retrieved in

whole, then XML Services are not needed.

26. D. "procedure cache size"

27. A. "heap memory per user"

Heap memory is an internal memory structure created at startup

that tasks utilize to dynamically allocate memory as needed. The

XML query function xmlparse uses this memory as working memory when creating the indexed or "parsed" XML document.

28. A and D

Currently the three optimization goals are allrows_mix,

allrows_dss, and allrows_oltp.


29. A, C, and D

Optimization goals can be set at server level, session level, and

query level.

30. D

The parameter 10 is the amount of time ASE can spend optimizing a query, as a percentage of the total time spent processing the query.

31. C.

The query uses the best available plan and does not throw any warning or error message.

32. B

When columns of different datatypes are used in a computed column, the resultant column will have the datatype that has the lowest hierarchy.

33. A

The getdate() value when the data was inserted

34. A

The keyword "materialized" means that the value of the column is preevaluated and stored in the table as opposed to being evaluated at the time of data retrieval.

35. B

The default characteristic of a computed column is "materialized."

36. D

Computed columns cannot have default values, and they are nullable by default.
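
Pulling answers 33 through 36 together, a minimal sketch of both column flavors, with both characteristics spelled out explicitly (all names hypothetical):

create table orders_demo (
    order_id  int      not null,
    placed_on datetime not null,
    -- preevaluated at insert/update time and stored in the row
    placed_mm compute datepart(mm, placed_on) materialized,
    -- evaluated at retrieval time; computed columns are nullable by default
    order_age compute datediff(dd, placed_on, getdate()) not materialized
)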

37. B

SERVER_NAME.krg stores the information about the shared memory segments used by the server.

38. B

System stored procedures are created in the sybsystemprocs database and can be invoked from any database.

39. B

The message you will get is "Number (2007) Severity (11) State (1) Server (SYBASE) Procedure (inv_proc) Cannot add rows to sysdepends for the current stored procedure because it depends on the missing object 'insert_inv_proc'. The stored procedure will still be created."

40. B

The message you will get is "Number (208) Severity (16) State (1) Server (SYBASE) Procedure (insert_inv_proc) #tempstores not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output)."

41. A

truncate table is a non-logged operation.

42. B and C

When a table is dropped, the associated indexes and triggers are also dropped. The associated views remain in the database but generate an error when invoked.

43. C. A table created in tempdb with the tempdb.. prefix can be accessed by multiple sessions.

44. C

"select into/bulk copy" should be set to true for the database before attempting to bcp in the data.
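
For example, a minimal sketch (database name hypothetical; sp_dboption is run from master):

use master
go
exec sp_dboption salesdb, "select into/bulkcopy/pllsort", true
go
use salesdb
go
checkpoint
go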

45. B and C

When data is copied into a table using the bcp command, the insert trigger is not fired, and the rules defined for a column are not enforced.

46. B

The "-e" option echoes the input commands.

47. A. 2

The fetch next will execute, but no rows will be returned from the cursor, as the cursor cannot fetch rows that are beyond the end of the cursor. The @@SQLSTATUS value of 2 in this scenario indicates that the end of the cursor has been reached.
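
A minimal sketch of detecting that condition (cursor and table names hypothetical):

declare order_csr cursor for
    select order_id from orders
go
open order_csr
fetch order_csr           -- returns the first row
while (@@sqlstatus = 0)
    fetch order_csr       -- keep fetching until a fetch moves past the last row
if (@@sqlstatus = 2)      -- 2 = no more rows: the end of the cursor was reached
    print "end of cursor reached"
close order_csr
deallocate order_csr
go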

48. B. All cursor rows are present in the cursor's worktable after the fetch last statement.

When the last row of a scrollable sensitive cursor is obtained, the worktable is fully populated, and cursor processing will no longer lock the base table as rows are fetched, only pages/rows belonging to the cursor's worktable.

49. F. All of the above

With ASE 15, database administrators can update statistics to target data that is local to a single semantic partition. All other partition update levels were already allowed on pre-ASE 15 servers.

50. D. select datachange ("authors","authors_part4","identification")

The other answers are all incorrect syntax and will generate an error from ASE.

51. C. It can help eliminate contention for index pages.

On tables where datarows or datapages locking is employed, locks on index pages are held as "latches," which are non-transactional locks on index pages. These locks are not held for the duration of a transaction, thus the reduction in contention for index pages with datarows locking.

52. A. XML in the database

For ASE 15, XML is available with the base installation of ASE and is not dependent upon Java in the database to operate.

53. A, B, D

ASE supports row-, page-, and table-level locking.

54. A, C

For ASE 15, it is possible to perform reorgs at the partition level on datarows and datapages locked tables.

55. A, B, D

A semantic partition cannot be bound by itself to a named cache, while the other listed objects can be separately bound to a named cache.
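
For contrast, a sketch of the bindings that are possible (cache, database, table, and index names hypothetical):

-- database-level bindings are issued from master
exec sp_bindcache "hot_cache", salesdb
-- table- and index-level bindings
exec sp_bindcache "hot_cache", salesdb, "orders"
exec sp_bindcache "hot_cache", salesdb, "orders", "orders_idx"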

56. D. Eight

Eight pages is equal to one extent. One extent is the minimum size for the nonclustered index in this question.

57. B. U.S. Social Security numbers

U.S. Social Security numbers are unique, and the likelihood of the partitions being equally balanced with unique keys is higher than the likelihood without. The repetitive key values that are likely with the incorrect answers will hash to the same partition, likely resulting in partition skew.

58. A. Hide virtual computed columns.

The hide-vcc bcp option instructs the bcp command to not copy virtual computed columns. This option can be used for bcp in and out.

59. E. 4 terabytes

60. A. where order_number > 12345

B is not a SARG since the search contains a like clause. C is not a SARG since it contains a not-equal comparison operator. D is not a SARG since the search is based on a computation.

61. C. Is only used during the upgrade process

It is a copy of the syspartitions system table used on pre-ASE 15 servers. After an upgrade to ASE 15, this table will not contain any data.

62. A. Fail, since a table cannot have more than one clustered index

It is not possible to have a clustered global and a clustered local index on the same table.

63. C. sybmgmtdb

The sybmgmtdb database is optional, as it is installed as part of the Job Scheduler, which is an optional product.

64. C. Nulls

count(*) counts all rows in the table, regardless of duplicate data or nulls. count does not count null data.

65. C. The bcp command will succeed

If the Rental table is a single partition table, a single .dat file will be created. If the table is partitioned, a .dat file will be created for each partition in the table.

66. D. set statistics plancost on

With set statistics plancost set to on, the query tree, along with estimated and actual I/O counts and row counts, is displayed in a text-based (non-GUI) format.
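
Usage is a simple session toggle; a brief sketch (table name hypothetical):

set statistics plancost on
go
select state, count(*) from orders group by state
go
set statistics plancost off
go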

67. D. 2 only

dsync and directio are mutually exclusive; they cannot both be set to true at the same time. If the devices used for temporary databases are file system devices (which is the Sybase recommendation), not raw partitions, dsync and directio should both be set to false.
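
A sketch of setting both options off with the sp_deviceattr procedure (device name hypothetical):

exec sp_deviceattr tempdb_dev, "dsync", "false"
go
exec sp_deviceattr tempdb_dev, "directio", "false"
go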

68. E

master, model, tempdb, sybsystemdb, sybsystemprocs

69. E

Raw, Summary Barchart, Usage Summary

70. D. All of the above

71. B, C, and D

Each of these characteristics defines a network license server environment.

72. A. One partition using round-robin partitioning

73. C

A partition cannot be dropped. In ASE 15, the table has to be rebuilt in order to drop a partition.

74. E

The order by, group by, and distinct clauses now take advantage of a hash-hybrid algorithm to satisfy these clauses as opposed to the creation of a worktable in tempdb. In a multiple temporary database environment, temporary worktables are created in the login-assigned temporary database. User-defined temporary worktables can be found in sysobjects in the temporary database that the user is assigned to at login time.

75. D

Access can be granted at the row level by using a view that limits the rows that are brought back to the client.

76. B, A, D, E, C

77. C. 11:00 AM

As the last full database dump is invalid, it cannot be loaded, nor can the subsequent transaction dumps. The DBA must refresh from the 9:00 AM full database dump, and apply the 10:00 and 11:00 AM transaction log dumps to recover to the latest possible point, which is 11:00 AM.

78. C. Just before 10:43 AM

The database can be recovered with load database from the 9:00 AM full backup, followed by a load transaction of the 10:00 AM backup, and completed with a load of the 11:00 AM backup with the "until_time" parameter set to about 10:42 AM.
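
A sketch of that load sequence (database name, dump files, and the exact cutoff timestamp are hypothetical):

load database salesdb from "/dumps/salesdb_full_0900.dmp"
go
load transaction salesdb from "/dumps/salesdb_tran_1000.trn"
go
load transaction salesdb from "/dumps/salesdb_tran_1100.trn"
    with until_time = "Mar 14 2006 10:42:59:999AM"
go
online database salesdb
go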

79. A and C

B is not correct since compressed backups for dumps to remote servers must specify compression with the "compression = 0-9" option.

80. B. 1

The default isolation level at the time of installation with the default server configurations is 1.

Appendix B

Use Cases

Throughout this book, many of the new ASE 15 features as well as some late 12.5.x changes are covered from a technical standpoint. To supplement some of the material offered in the book with practical applications, this appendix demonstrates the use of ASE 15 new features in real business scenarios.

This appendix presents three business cases, employing the following ASE 15 new features:

• Case 1:
  • Semantic partitions
  • Computed columns
• Case 2:
  • XML
  • Computed columns
  • Functional indexes
• Case 3:
  • Scrollable cursors

Business Case 1

For business reasons, Company X must maintain a large amount of historical data within ASE's databases, available to the user base. As the volume of data expands over time into hundreds of gigabytes of space utilization, the maintenance windows continue to increase and performance degradation is apparent. Additionally, the business has evaluated several possible scenarios on how to manage this data explosion while balancing data availability, maintenance windows, and complexity of implementation.

For Company X, four basic options are proposed, each with its own inherent advantages and some limitations. The options presented assume the company cannot simply delete old data or take historical data into an offline state. The options also assume all data will be retained in one ASE server in order to avoid additional infrastructure and to control license costs.

• Option 1: One VLDB with large tables self contained
• Option 2: Primary database and an archive database for the largest tables
• Option 3: Several small databases, each holding a portion (none overlapping) of the data from the very large tables
• Option 4: One VLDB with a large table, partitioned semantically

** Option 4 is possible due to changes to ASE in version 15.

How can ASE 15 help this organization? With semantic partitions, partition-level maintenance (partition reorgs, statistics, dbccs), local indexes, and partition-aware optimization.

Option 1: One VLDB with large tables self contained

Architecture Overview

With this solution, one table, potentially very large, resides within one database on one ASE server. Users access this one table directly to satisfy queries. Database administrators perform maintenance on the large database and large tables as a whole. For very large APL locked tables, reorganization of the tables must be performed with a drop and recreate of the clustered index. In order to accomplish a drop and recreate of the clustered index on the large table, upward of 200% of the table's size must be available as free space within the database to recreate the clustered index and thus eliminate any fragmentation. For DOL locked tables, reorg rebuild operations must also maintain up to 200% of large table size to complete reorg rebuild operations. dbccs must be performed on the single database or table as a whole. Update statistics must be performed targeting the specific indexes or columns, table wide. In many instances, the update statistics operations and the reorganization of fragmented data are not possible with large amounts of data due to the maintenance window requirements to perform these operations.

Advantages

• No special programming logic to access the one large table
• No need to create a view to span multiple databases or tables to simplify application development
• Initially the simplest solution to implement

Limitations

• Maintenance performed on the large tables and databases as a whole
• dbccs, reorgs, index maintenance, and update statistics operations must address full tables and/or databases
• Optimizer must consider the table as a whole when a useful index is not present
• Must carry overhead of extra free space within the database to perform reorg/index rebuilds for the largest table
• Must physically delete old data from the production database
• Cannot make older data read-only
• Recovery operations can be more time consuming than other proposed solutions, as Backup Server must load the database as a whole, including allocated but unused database pages, when performing a full restore

Option 2: Primary database and an archive database for the largest tables

Architecture Overview

With this solution, two databases are created. One database contains a table that holds the data used to provide results for a period of time that is considered "current." In this use case the contents of that table provide results for 85% of the application queries. The second database contains a large table, potentially very large, containing data that is maintained for historical purposes, such as contract requirements, government requirements, etc. Database administrators perform maintenance on the two databases and the two tables as a whole. For the smaller database/table, the response time of the application is of major concern, as is any impact from database maintenance activities.

For a very large APL locked table in the larger database, reorganization of the table must be performed with a drop and recreate of the clustered index. In order to accomplish a drop and recreate of the clustered index on the large table, upward of 200% of the table's size must be available as free space within the database to recreate the clustered index and thus eliminate any fragmentation. Even though the smaller database/table requires the same amount of extra space for recreating the clustered index, the fact that the entire table is locked during the index rebuild will affect the uptime of the application.

For DOL locked tables, reorg rebuild operations must also maintain up to 200% of large table size to complete. dbccs must be performed on both databases and tables as a whole. Update statistics must be performed targeting the specific indexes or columns, table wide. In many instances, the update statistics operations and the reorganization of fragmented data are not possible with large amounts of data due to the maintenance window requirements to perform these operations.

The performance on the current data database will be affected while the data is copied to the historical database and then dropped from the current database/table.

For queries that need to access data from both the current and the historical tables, the data can be unioned. For those queries where the data is from either one or the other database, a mechanism has to be developed and maintained that tracks where the date ranges occur. This mechanism has to be accessed to determine the database to use to resolve the query each time data is requested. In some rare cases, the request may have to be broken down into two queries to be resolved, particularly if the data is being archived at the time of the request.

Advantages

• Maintenance can be performed on current data, as current data is contained in only the primary database.
• Application development could be simplified with the creation of a view of the "same" table across the current and history databases.
• The archive database can be placed into a read-only mode.
• Maintenance of the archive database will decline dramatically when in read-only mode, as changes are not permitted except for when data is archived on a periodic basis from the primary database.

Limitations

• Must carry out joins across two databases if a query needs simultaneous access to current and history data
• Cannot update history data with database in read-only mode

• Special programming logic needed to access the multiple tables across two databases due to the introduced physical fragmentation
• Overall, more difficult for the DBA to manage
• Often, downtime is scheduled to perform archive operations from the primary to the secondary database. If downtime is not available with this option, additional resource utilization can take place during the archive of data from the primary to the archive database.

Option 3: Several small databases, each holding a portion (none overlapping) of the data from the very large tables

Architecture Overview

In this option, smaller tables are created in several smaller databases to handle the data from an otherwise very large table. As the data is loaded into the appropriate table in a monthly database, the other tables are not affected. Only queries accessing the data in that month are blocked from accessing the new data during the load process. As in the previous option, a mechanism has to be developed and maintained in order for the application to determine where to find the requested data. As data is aged and no additional data is added to the older tables, limited maintenance will be necessary against the older static tables, and it may be possible to set the older databases as read only. Clustered indexes will only need to be rebuilt against the current data. This reduces the maintenance windows since only one table or database requires maintenance to be performed against it.

One last issue is that the next database/table has to be created by the DBA prior to when it is needed. If the DBA fails to create the database or table, the data load process will fail.

Advantages

• Ability to back up/restore at a more granular level
• Ability to make databases with older data read-only
• May be feasible to create a view to join together the same tables that are fragmented across multiple databases

Limitations

• Introduces fragmentation at the table level into the system design
• Need for special logic to access data that scans horizontally partitioned tables that physically span multiple tables in multiple databases
• Overhead must be carried in each individual database to accomplish reorgs and index maintenance
• More overhead for database administrators to maintain additional databases

Option 4: One VLDB with a large table, partitioned semantically

Architecture Overview

With ASE 15, this option is now possible. Although the data will be stored in one large table, partitioning allows for maintenance to be performed on only the necessary partitions. Also, the application does not need to be aware of where the data is located, that is, on which database and in which table. The optimizer can eliminate unnecessary access to partitions where the data is not needed to satisfy the request. Since the partition key only needs to know which month is being stored, a materialized computed column can be created that uses only the month part.
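
As a sketch of that design (all names hypothetical), the large table might be list partitioned on a materialized computed column holding the month, which then lets maintenance target a single partition:

create table orders_history (
    order_id    int      not null,
    start_date  datetime not null,
    -- stored at insert/update time; serves as the partition key
    start_month compute datepart(mm, start_date) materialized
)
partition by list (start_month)
    (p01 values (1),  p02 values (2),  p03 values (3),
     p04 values (4),  p05 values (5),  p06 values (6),
     p07 values (7),  p08 values (8),  p09 values (9),
     p10 values (10), p11 values (11), p12 values (12))
go

-- maintenance can now be pointed at one partition at a time
update statistics orders_history partition p01
go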

Advantages

• No special programming logic to access segregated data
• Only one table/database to access
• Maintenance operations, such as reorgs, dbccs, index maintenance operations, and update statistics, can be targeted to very specific subsets of data
• Partition-aware optimization. Optimizer will eliminate from consideration the partitions not needed to satisfy a query

• Start_date can be an indexed computed column for use by the Query Optimizer

Limitations

• Backups/restores of the database can be time consuming
• Cannot drop old partitions (old data), only truncate
• Cannot make portions of the data read-only
• Database administrator should monitor for partition balance with some partition types

Business Case 2

Company Y's system must now consider the integration of XML document storage within their relational database. The company needs the ability to quickly scan for keywords within the XML documents in order to determine which documents to retrieve.

How can ASE 15 help this organization? With consideration to XML storage methods (internal vs. external to the database), computed (materialized) columns, and functional indexes.

Option 1

If database storage isn't an issue and only a few elements within the XML documents are needed to retrieve the documents, then the easiest way to accomplish the above goal is to create a table with a text datatype to store the XML documents and computed columns based on elements within the XML documents. For example:

create table xml (
    xml text,
    ID compute convert(int,
        xmlextract('//ns0:geographyID/text()', xml)) materialized
)

By making those columns materialized, they can have functional indexes added on them. This speeds up the ability to find the correct XML documents without having the overhead of indexed "parsed" XML documents.
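
For instance, a conventional create index statement on the materialized ID column from the example above provides the functional index:

create index xml_id_idx on xml(ID)
go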

Advantages

• Fast access to data within the XML documents where a functional index is employed
• Storage requires little more than the actual raw size of the XML document

Limitations

• Slow access to data within the XML documents for large documents when the data is not contained within a materialized column
• Extended time to load the XML data into ASE for large documents when materializing a number of columns

Option 2

If the number of elements needed to find the XML documents becomes great, the number of materialized columns can be limited to one or two. Then the xmlextract function can be used within the where clause to find the XML documents. For example:

where ID = 100 and
    xmlextract('//ns0:geographyID/text()', xml) = 200

This might create a performance issue if the XML documents are too large. To increase performance, the XML documents would need to be indexed or "parsed" using the xmlparse function.
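
A minimal sketch of storing the parsed form (the xml_parsed table is hypothetical); xmlparse returns the parsed document as an image value:

create table xml_parsed (xml_idx image)
go
insert xml_parsed
select xmlparse(xml) from xml
go

-- xmlextract can then operate on the parsed image form
select xmlextract('//ns0:geographyID/text()', xml_idx)
from xml_parsed
go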

Advantages

• Fast access to data within the XML documents
• More flexibility in changing the columns being used in the where clause without causing performance issues

Limitations

• The storage requirements are more than double those of Option 1.
• The memory requirement and the time needed to load the indexed XML documents increase exponentially with the size of the documents.

Option 3

If database storage becomes an issue, an option would be to store the XML documents in a compressed form in an image datatype. This can only be done if the needed elements can be extracted from the XML documents outside of the database and then inserted into separate columns within the table. This gives the same ability to retrieve the XML documents as Option 1, but places the work to extract the elements within the XML documents on the application outside of the database. It also forces the application to compress and uncompress the XML documents.

Advantages

• Considerably less storage required within ASE
• XML in the database license not required

Limitations

• Cannot use the xmlextract function within the where clause for data within the XML documents
• Additional logic needed in the application to store and retrieve XML documents

Option 4

Another way to solve a database sizing issue is to store the XML documents in a file system directory and map the XML documents to a logical proxy table. A view can then be created to eliminate the unnecessary columns from the proxy table and to add additional columns based on the XML document file name and/or data within the XML document.

With this option, users cannot simultaneously insert additional columns for data not contained within the XML document. All the other options would allow descriptive data to be added within the same row as the XML document. This data could then be used to query for the XML document. Within this option, there is no physical table within the database containing the XML document; therefore, descriptive data would have to be contained within its own table and related to the XML document via some ID common to both. The appearance of the XML document through the proxy table and the inserting of this descriptive data cannot be wrapped within a database transaction.

The following is an example of creating a proxy table and view:

create proxy_table _xmlDirectory external directory at
    "/remotebu/disk_backups/xmlDirectory"

create view xmlDirectory (ID, code, xml) as
select convert(integer, substring(filename, 6, 3)),
       convert(varchar(10), xmlextract('//ns0:geographyID/text()', content)),
       content
from _xmlDirectory

Advantages

• No need to store XML into ASE
• No memory requirements for storing XML into ASE

Limitations

• Users cannot simultaneously insert additional columns for data not contained within the XML document
• Slow access to data within the XML documents for large documents
• The "enable file access" and "enable cis" options have to be set for this option

Business Case 3

Background on Current Architecture

Company Z runs an order processing system. Due to limitations of their physical production facilities, a request was created by management to indefinitely postpone 50 customer orders, beginning with the 50th order beyond the current order. Management believes this arbitrary delay of 50 orders will allow the facility to catch up with current orders. To identify and prevent the production of the delayed orders, the customer_order date will be future dated to "12/31/3000." Because the orders table acts as a queue for the processing of orders, as the 50 orders with a future start date are obtained, production at the facility will begin to slow as the new orders cannot begin to process due to the future dated start date associated with the order.

Proposed Solutions

The IT developer assigned to this project at Company Z considers three alternatives in order to accommodate this request. The first is the use of #temp tables in order to isolate the 50 rows that need their order dates updated. With this method, rows are extracted from one or more user tables and from one or more databases into a temporary table in the tempdb database. Within the tempdb database, a user process continues by isolating the 50th through 100th rows, with "set" logic and "select into" along with the creation of additional #temporary tables. Once the 50th through the 100th rows are isolated, updates are applied to the base table(s).

In the second example, the IT developer considers the use of standard pre-ASE 15 cursors. Using this methodology, a process reads 1,000 or more rows into the nonscrollable cursor. Since the developer is only interested in the 50th through the 100th rows of the cursor, additional processing is necessary. In order to target the 50th row, the cursor must fetch and discard the 1st through the 49th cursor rows before obtaining the needed 50th row. Once the 50th row of the cursor is obtained, processing proceeds by updating the base table where the current cursor row relates to the rows in the base table(s).

The third option is only possible with ASE 15.0 and greater. The third option employs the use of a scrollable cursor to directly position the cursor pointer at the 50th row in the cursor. From the 50th row forward, the cursor can traverse forward until all necessary rows are accessed and the corresponding base table is updated.

Note: While the scenario presented in this business case is fictional and may not be realistic, the message the authors convey is the simplicity and elegance presented with the scrollable cursor in comparison to the complexity associated with the options where a more traditional cursor or temporary table are employed.

Option 1

To facilitate this scenario, a temporary table called #order_temp is generated from a select into statement:

-- build the temp table with all orders that have not started processing
select * into #order_temp
from orders
where order_date > (select max(order_date)
                    from orders
                    where processed = "Y")
and processed = "N"

Next, the first 50 rows would need to be deleted from the temporary table:

set rowcount 50
delete #order_temp
go

Next, move the 50 target rows to a new temporary table called #order_temp2:

-- set rowcount remains at 50
select * into #order_temp2
from #order_temp
go
set rowcount 0       -- rowcount of 50 no longer needed
go
delete #order_temp   -- table no longer needed; the 50 rows
go                   -- needed are now in #order_temp2

Using the same interim set, update the 50th to 100th rows in the orders table:

update Accounts..orders
set order_date = "12/31/3000"
from Accounts..orders c,
     #order_temp2 t
where t.order_id = c.order_id

Processing is complete; remove the #temporary table with drop table:

drop table #order_temp2
go

Advantages

• Can be performed on any supported version of ASE
• The temporary table could be made to persist beyond the scope of the user session if necessary
• Can take multiple passes at each row in the #temporary table if needed

Limitations

• Moderate complexity
• Must complete multiple steps to isolate the target rows
• Locks briefly held on tempdb's system tables

Option 2

For Options 2 and 3, the following diagram represents the rows needed by the cursor.

[Figure: cursor CSR1 result set, with the 50th through 100th rows as the target rows]

Declare and open the cursor to process the rows:

declare CSR1 cursor for
select order_id, order_date
from orders
where order_date > (select max(order_date)
                    from orders
                    where processed = "Y")
and processed = "N"

open CSR1

Set the cursor rows to 50 in order to fetch the first 50 rows, and discard them:

set cursor rows 50 for CSR1
fetch CSR1

Reset cursor rows to 1 since the rows to process are next:

set cursor rows 1 for CSR1

Fetch the cursor rows into local variables, then use the variables to update the base table within a while loop to perform the update 50 times:

declare @order_id int,
        @order_date datetime,
        @counter int

select @counter = 50
while (@counter > 0)
begin
    fetch CSR1
        into @order_id, @order_date

    update Accounts..orders
    set order_date = "12/31/3000"
    where order_id = @order_id

    select @counter = @counter - 1
end

Verify the correct rows were updated:

select * from orders where order_date = "12/31/3000"

Close and deallocate the cursor:

close CSR1
deallocate CSR1

Advantages

• Can be performed on any supported version of ASE

Limitations

• Additional steps to fetch and discard the unneeded rows
• Can only take one pass at each row, even if multiple passes per row were necessary
• Moderate to low complexity
• Must complete multiple steps to isolate the target rows

Option 3

Use a scrollable cursor to set the cursor position directly at the 50th row in the result set.

Declare and open the cursor to process the rows:

declare CSR1 scroll cursor for
select order_id, order_date
from orders
where order_date > (select max(order_date)
                    from orders
                    where processed = "Y")
and processed = "N"

open CSR1

Fetch the cursor rows into local variables, then use the variables to update the base table within a while loop to perform the update 50 times. Use the fetch orientation of absolute to skip to the 50th cursor row and to control the cursor position.

declare @order_id int,
        @order_date datetime,
        @counter int

select @counter = 50
while (@counter <= 100)
begin
    fetch absolute @counter CSR1
        into @order_id, @order_date

    update orders
    set order_date = "12/31/3000"
    where order_id = @order_id

    select @counter = @counter + 1
end

Verify the correct rows were updated:

select * from orders where order_date = "12/31/3000"

Close and deallocate the cursor:

close CSR1
deallocate CSR1

Advantages

• Low complexity
• Most elegant solution; can directly access the rows needed in the result set without special processing
• Can take multiple passes at each row if necessary

Limitations

• Can only create and manipulate scrollable cursors on ASE 15 and greater
• Will use tempdb to hold the scrollable cursor
