Sybase DBA Manual
11/10/2010
This document is a humble attempt from our end to welcome readers into the intriguing and amazing world of SYBASE. Utmost care has been taken to ensure that it is easily grasped even by novices. We hope this document becomes the first step before you launch yourself into the profession of DATABASE ADMINISTRATION. A few of the topics have been adapted from www.sybase.com.
Contents
What is DBMS/RDBMS?
Duties of DBA
ASE Overview and Architecture Diagram
What is Database?
How is data stored in database?
Disk Initialization
New Page Allocation Procedure in ASE
Segments
Thresholds
Roles & Groups
Logins & User
Interface & Error Log File
ASE Memory Usage
Db_options
Configuration Parameters
Indexes
Update Stats & Sp_recompile
Locks & Isolation Level
Phases When Query Is Executed & Process Status
Hit/Miss Diagram
Start and Shut Down Of Server
Backup/Recovery/Refresh/Restore
Dbcc
MDA Tables
Multiple Temp Databases
Utilities
Troubleshooting
How to Apply EBF
Query and Server Performance Tuning
Calculations
Sybase Diagram
Replication Overview and Architecture Diagram
Crontab
What is DBMS/RDBMS?
A Database Management System (DBMS) is a collection of programs that enables users to create and maintain a database. It can also be described as a process of managing data for efficient storage and retrieval. Hence it is general-purpose software that facilitates the processes of defining, constructing, manipulating and sharing databases among various users. An RDBMS is a database management system that stores data in the form of tables, with relationships existing between the tables. In an RDBMS, data and information are accessed through these relations or tables.
Duties of DBA
The following are some of the duties of a DBA:
- Checking the server status (automate the job in crontab to monitor the server status).
- Ensuring that backups happen daily.
- Health checks for all the databases.
- Performance-related tasks (automate update statistics & sp_recompile in crontab).
- Rebooting the servers during the maintenance window.
- Security management (adding logins/users with proper approvals from application/technical leads).
- Proactively monitoring database data & log growth (threshold setup).
- Monitoring error logs.
ASE Overview and Architecture Diagram
Adaptive Server Enterprise (ASE) has long been noted for its
reliability, low total cost of ownership and superior performance.
With its latest version, ASE 15, it has been dramatically enhanced
to deliver capabilities urgently needed by enterprises today. It
lays the long-term foundation for strategic agility and continuing
innovation in mission-critical environments. ASE 15 provides unique
security options and a host of other new features that boost
performance while reducing operational costs and risk. Find out how
you can exploit new technologies such as grids and clusters,
service-oriented architectures and real-time messaging.
ASE 15 meets the increasing demands of large databases and high
transaction volumes, while providing a cost effective database
management system. Its key features include on-disk encryption,
smart partitions and new, patent-pending query processing
technology that has demonstrated a significant increase in
performance, as well as enhanced support for unstructured data
management. ASE is a high-performance, mission-critical database
management system that gives Sybase customers an operational
advantage by lowering costs and risks.
[Architecture diagram: shared memory (up to 'max memory') containing the ASE executables, procedure cache, statement cache, user log cache, data cache and stack space; the default 2K data cache pool has an MRU/LRU chain with a wash area, and the wash marker can range from 20% to 80% of the pool; the 2K pool is used for roll forward/rollback. Around the data server sit the error log, interface file, configuration file and .krg file; system and user devices (file system / raw devices) hold the system and user databases with their DB options, segments and thresholds; at boot the master device is initialized and the master DB recovered.]
Figure 1 Architecture Diagram
What is Database?
A database is a collection of data objects (tables, views, stored procedures, functions, triggers and indexes). The number of databases that can be created under one Adaptive Server depends on the configuration parameter 'number of databases'. All the information regarding databases created in a single Adaptive Server can be viewed in the system table SYSDATABASES, and the database space usage in SYSUSAGES. Databases are broadly divided into system and user databases. System databases are default databases created during Adaptive Server installation (master, model, tempdb & sybsystemprocs); a few system databases are optional and can be created or configured by the DBA (sybsecurity, sybsyntax, dbccdb, sybsystemdb and the sample database pubs). User databases can only be created by the system administrator or whoever has system administrator privileges. A maximum of 256 databases can be created on a single Adaptive Server.
Syntax:
create database database_name
    [on {default | database_device} [= size] [, database_device [= size]]...]
    [log on database_device [= size] [, database_device [= size]]...]
    [with {override | default_location = "pathname"}]
    [for {load | proxy_update}]

Options:
with override: must be specified when data and log segments are placed on the same database device.
for load: does not initialize the allocated space, which saves time when a dump will be loaded next.

alter database db_name on data_dev2 = '100M'
drop database db_name: drops the database, provided it is not currently in use and does not contain any constraints referring to other databases.
dbcc dbrepair (db_name, dropdb): drops a damaged database.
sp_helpdb: displays information about the specified database. When used without a database name it displays information about all databases; when the database name is the current database it also displays segment information.
sp_helpdb db_name, 'device_name': displays device fragments in alphabetical order rather than the default order in which the device fragments were added.
sp_spaceused: displays the total space used by all the tables in the current database, e.g. sp_spaceused appl3.
sp_renamedb olddb_name, new_dbname (the database must be in single-user mode).
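As a sketch of the commands above (device, database and size names are hypothetical), a database with data and log on separate devices could be created and inspected like this:

```sql
-- Hypothetical devices datadev1 and logdev1 are assumed to already
-- exist (created earlier with disk init).
create database appl3
    on datadev1 = '200M'      -- data segment
    log on logdev1 = '50M'    -- log segment on a separate device
go

sp_helpdb appl3               -- size, device fragments and db options
go
```

Placing the log on its own device, as here, is what makes transaction-log dumps and up-to-the-minute recovery possible.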
The master DB stores information regarding all other databases, logins, devices, etc. It keeps track of all the databases and has roughly 32 system tables, some of which are syslogins, sysdatabases, sysdevices, sysroles and sysprocesses. Server-wide details are stored here; it is the heart of the server. The model DB is the template for all databases, excluding master. Tempdb can be regarded as a workspace for users to perform operations. It is volatile, so whenever the server is rebooted it is recreated from the model DB template. Three kinds of tables are created in tempdb:
- Session-level (table name prefixed with #): a session-level temporary table exists until the user's session expires.
- Global-level (table name prefixed with tempdb..): a global-level temporary table exists until the server is rebooted.
- Worktables: tables the system creates internally, for example for sorting.
The SYBSYSTEMPROCS DB stores all the system procedures.
How is data stored in database?
The data in a database is stored in the form of tables. The smallest unit of data storage is a page; 8 contiguous pages make an extent. The size of a page can be 2KB, 4KB, 8KB or 16KB, so the minimum size of a table is one extent (8 * page size). Allocation unit: a collection of 256 pages is called an allocation unit (AU). Each allocation unit has a first page called the allocation page (AP), which stores all the information about the pages of that unit. The Object Allocation Map (OAM) stores information about the pages of a table; in turn, the OAM entries point to the APs of all the allocation units where the object's data is stored. The number of entries per OAM page depends on the logical page size the server is using:

Logical page size    OAM entries per page
2K                   250
4K                   506
8K                   1018
16K                  2042

The Global Allocation Map (the SYSGAMS system table) records and tracks information about all the AUs in a particular database. SYSGAMS is not accessible by any user. For every 8GB of disk space a new GAM is created.
Figure 2 Overview of an Allocation Unit
Latch: Latches are non-transactional synchronization mechanisms used to guarantee the physical consistency of a page. While rows are being inserted, updated or deleted, only one Adaptive Server process can have access to the page at a time. Latches are used for datapages- and datarows-locked tables, but not for allpages-locked tables.
Note: the most important distinction between a lock and a latch is the duration.
Table 1 Difference between Latch and Lock
Latch: held only for the time required to insert or move a few bytes on a data page, to copy pointers, columns or rows, or to acquire a latch on another index page.
Lock: can persist for a long period of time: while a page is being scanned, while a disk read or network write takes place, for the duration of a statement, or for the duration of a transaction.
Disk Initialization
Disk initialization is the process of allocating disk space to the server. During initialization Adaptive Server divides the newly allocated disk into allocation units and makes an entry in the sysdevices system table. All the information regarding devices connected to the server can be viewed in the system table SYSDEVICES. The number of devices that could be connected to a server was limited to 256 up to version 12.5 and became effectively unlimited from version 15. A disk that is allocated to a server cannot be shared with other servers, but any number of databases can use a disk as long as they are in the same server. The number of devices that can be allocated to the Adaptive Server depends on the configuration parameter 'number of devices'. A disk initialized to the Adaptive Server can only be dropped when all the associated databases are dropped. The 'disk default' option should be turned off for devices holding system databases.
Syntax:
disk init
    name = "device_name",
    physname = "physicalname",
    [vdevno = virtual_device_number,]
    size = number_of_blocks
    [, vstart = virtual_address, cntrltype = controller_number]
    [, contiguous]
    [, dsync = {true | false}]

ASE 15.0:
disk init name = 'dev2', physname = '/data/sql_server.dev2', size = '100M', directio = true
(vdevno is assigned automatically in 15.0.) The default, dsync = true, guarantees that disk writes survive a crash.
Note: dsync/directio apply only to file-system devices. directio is faster than dsync, and the two cannot be used together. Maximum devices: pre-15.0, 256; 15.0 onwards, about 2 million.
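A hedged worked example of the syntax above (the device name and path are hypothetical):

```sql
-- Initialize a 100 MB file-system device; vdevno is assigned
-- automatically in ASE 15.0.
disk init
    name = 'data_dev2',
    physname = '/sybase/data/data_dev2.dat',
    size = '100M',
    directio = true
go

sp_helpdevice data_dev2     -- confirm the new entry in sysdevices
go
-- A file-system device's dsync/directio attribute can be changed later:
sp_deviceattr data_dev2, 'directio', 'true'
go
```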
sp_helpdevice: shows device details.
sp_dropdevice device_name: drops the device entry but does not delete the file in the file system. Make sure to drop the associated databases before dropping any device.
sp_deviceattr device_name, {'dsync' | 'directio'}, {true | false}: changes a device attribute.
New Page Allocation Procedure in ASE
[Diagram: a data page with its page header and table row offsets; the GAM with next/previous pointers; allocation units AU1 and AU2, each with an allocation page (AP) and OAM-mapped extents; the object is extended to another AU.]
Figure 3 Page Allocation Procedure
Whenever a user inserts some data, the server first checks for available pages in the current extent (via the OAM) and inserts into one of them. If none is found, a new extent is allocated for that object in the same allocation unit with the help of the allocation page, and this extent is mapped into the object's OAM. If no extent is available in the same allocation unit, the server consults the GAM for an allocation unit that has an available extent, and the allocation page of that unit allocates the new extent. After the extent is allocated to the object in a different allocation unit, it too is mapped into the object's OAM. If no allocation unit tracked by the current GAM can satisfy the request, a new GAM is created and the whole process is repeated; a new GAM comes into the picture for every additional 8GB of data. In this way the GAM, AU, AP and OAM all take part whenever a new page is requested.
Segments
A segment can be described as a logical name given to a single device, a fraction of a device, or several devices. There are two types of segments: system and user.
System-defined segments: system, default & logsegment.
- system: stores all the data related to system tables in that particular database.
- logsegment: all data modifications in the database are temporarily stored in the log.
- default: stores the data related to user-created data objects.
User-defined segments: a maximum of 32 segments can be created in a database, including the 3 system segments.
The SYSSEGMENTS and SYSUSAGES system tables store detailed information regarding the segments, database size, etc. The data and log segments for a single database should not be placed on the same device: doing so hurts performance and makes up-to-the-minute recovery impossible. Before dropping a segment we should ensure that the objects associated with that segment are dropped. When we add additional space to the database, the system and default segments automatically extend onto the new device, whereas user-created segments have to be extended manually. There are 3 ways in which we can move tables from one segment to another:
- Using bcp: take a backup of the objects and then copy them back into the new segment.
- Using a clustered index: by re-creating a clustered index for the object on the new segment.
- Using sp_placeobject: it moves future allocations to the new segment.
Syntax:
sp_addsegment seg_name, db_name, device_name: creates seg_name in the current database on the named device; db_name must match the current database.
sp_placeobject segment_name, object_name: future allocations for the object will be mapped to the new segment. The object name can be a table name or an index name ('tab_name.index_name').
sp_helpsegment [seg_name]: displays information about all the segments in the current database, or, if a segment is specified, about only that segment.
sp_dropsegment segment_name, db_name [, device_name]: drops the segment in the current database.
sp_extendsegment seg_name, db_name, device_name: extends the segment onto another device.
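Tying the procedures above together, a hypothetical user segment for a large table might be set up as follows (the database, segment, device and table names are all assumptions):

```sql
-- Create a user segment on device datadev2 in database appl3,
-- then direct future allocations of a big table onto it.
use appl3
go
sp_addsegment bigtab_seg, appl3, datadev2
go
sp_placeobject bigtab_seg, 'big_table'   -- future pages go to bigtab_seg
go
sp_helpsegment bigtab_seg                -- verify the mapping
go
```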
Thresholds
Thresholds monitor the free space in a database and alert the DBA to take appropriate action before a database segment fills up; if neglected, the server can hang and users cannot access it. Thresholds can be defined on data and log segments. There are two types of thresholds: system & user.
System level: the last-chance threshold (LCT). Usually 18% of log space is reserved for the LCT. The LCT limit cannot be modified; it is set by the Adaptive Server automatically. We can only modify the stored procedure sp_thresholdaction, which sends an alert when a transaction crosses the LCT.
User level: free-space thresholds (FCT), defined by the user as per the usage of the database and the size of the log segment. FCTs can be dropped or modified.
A maximum of 256 thresholds can be created for a database. All the details regarding thresholds can be found in SYSTHRESHOLDS.
Syntax:
sp_addthreshold dbname, segname, free_space, proc_name: to add a threshold.
sp_modifythreshold dbname, segname, free_space [, new_proc_name] [, new_free_space] [, new_segname]: to modify a given threshold.
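As a sketch, a user-level (free-space) threshold that dumps the transaction log when free log space runs low might look like this. The procedure name, dump path and page count are illustrative assumptions; the four parameters are the ones ASE passes to any threshold procedure.

```sql
-- Illustrative threshold procedure: dump the log of the affected
-- database when the threshold fires.
create procedure sp_log_dump_action
    @dbname varchar(30), @segmentname varchar(30),
    @space_left int, @status int
as
    declare @cmd varchar(255)
    -- dump transaction needs a literal db name, so build it dynamically
    select @cmd = 'dump transaction ' + @dbname +
                  ' to "/sybase/dumps/' + @dbname + '_log.dmp"'
    exec (@cmd)
go

-- Fire sp_log_dump_action when fewer than 2048 free pages remain
-- on the log segment of appl3.
sp_addthreshold appl3, logsegment, 2048, sp_log_dump_action
go
```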
Roles & Groups
Roles provide individual accountability for users performing system administration and security-related tasks. Roles are granted to individual server login accounts, and actions performed by these users can be audited and attributed to them. System table: sysroles. Groups provide a convenient way to grant and revoke permissions to more than one user in a single statement. The sp_addgroup system procedure adds a row to sysusers in the current database; in other words, each group in a database, as well as each user, has an entry in sysusers. By default every ASE database has the group 'public'. Below are a few of the roles:
- System Administrator (sa_role)
- System Security Officer (sso_role)
- Operator (oper_role)
- Sybase technical support (sybase_ts_role)
- Replication (replication_role)
- Distributed transaction manager (dtm_tm_role)
- High availability (ha_role)
- Monitoring and diagnosis (mon_role)
- Job Scheduler administration (js_admin_role)
- Real-time messaging (messaging_role)
- Web Services (web_services)
- Job Scheduler user (js_user_role)
Syntax:
create role role_name [with passwd "password" [, {passwd expiration | min passwd length | max failed_logins} option_value]]: to create a role.
sp_addgroup grpname: to create a group.
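A minimal sketch of the syntax above (the role, group and login names are hypothetical):

```sql
-- Create a password-protected role and a group.
create role reporting_role with passwd "Secr3t!pw"
go
sp_addgroup app_readers
go
-- Roles are granted to server logins with grant role
-- (the login jsmith is an assumption).
grant role reporting_role to jsmith
go
```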
Logins & User
SYSLOGINS holds the details which allow people access at the server level; SYSUSERS holds the details which allow people access at the database level. The two tables are related through the column suid. syslogins: suid, name, password, dbname, srvname, procid. sysusers: suid, uid, gid, name.
Syntax:
sp_addlogin loginame, passwd [, defdb] [, deflanguage] [, fullname] [, passwdexp]: to create a login with a default database.
sp_adduser loginame [, name_in_db [, grpname]]: to create a user in a database for a login.
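For example (the login, password, database and group names are assumptions):

```sql
-- Create a server login with appl3 as its default database,
-- then add it as a user of appl3 in the app_readers group.
sp_addlogin jsmith, "Initial!pw1", appl3
go
use appl3
go
sp_adduser jsmith, jsmith, app_readers
go
```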
Interface & Error Log File
An interface file contains network information about all servers
on your network, including Adaptive Server, Backup Server, and XP
Server, plus any other server applications such as Monitor Server,
Replication Server, and any other Open Server applications. The
network information in the file includes the server name, network
name or address of the host machine, and the port, object, or
socket number (depending on the network protocol) on which the
server listens for queries. Dsedit or Dscp utility are used to
create an interface file. Using Dsedit or Dscp is preferred than
text editor as its easier to use and ensures that the interface
file is consistent in format.
The error log is stored externally. All server-level events and errors of severity level 17 and above are recorded in the error log; for each we see the error number, severity level and error message. These error messages can also be found in the table sysmessages. Errors with severity level below 17 are not recorded in the error log, as their effect is minimal, and object-level error messages are not included either. The ASE cannot start without an error log file.
ASE Memory Usage
Memory is consumed by various configuration parameters, the statement cache, the procedure cache and the data caches. The total memory allocated during boot time is the sum of the memory required for all the configuration needs of Adaptive Server; this value can be obtained from the read-only configuration parameter 'total logical memory'. The configuration parameter 'max memory' must be greater than or equal to 'total logical memory'; 'max memory' indicates the amount of memory you will allow for Adaptive Server's needs. During boot time, by default, Adaptive Server allocates memory based on the value of 'total logical memory'. However, if the configuration parameter 'allocate max shared memory' has been set, the memory allocated will be based on the value of 'max memory'.

Caches in Adaptive Server:
Procedure cache: Adaptive Server maintains an MRU/LRU (most recently used / least recently used) chain of stored procedure query plans. As users execute stored procedures, Adaptive Server looks in the procedure cache for a query plan to use. If a plan is available, it is placed on the MRU end of the chain and execution begins. If more than one user uses a procedure or trigger simultaneously, there will be multiple copies of it in cache. If the procedure cache is too small, a user trying to execute stored procedures or queries that fire triggers receives an error message and must resubmit the query; space becomes available when unused plans age out of the cache. The default procedure cache size is 3271 memory pages.
Statement cache: the statement cache allows Adaptive Server to store the text of ad hoc SQL statements. Adaptive Server compares a newly received ad hoc SQL statement to cached SQL statements and, if a match is found, uses the plan cached from the initial execution. In this way Adaptive Server does not have to recompile SQL statements for which it already has a plan, allowing the application to amortize the cost of query compilation across several executions of the same statement. The statement cache memory is taken from the procedure cache memory pool.
Data cache: at installation the default data cache has a single 2K memory pool. The data cache contains pages from recently accessed objects, typically: sysobjects, sysindexes and other system tables for each database; active log pages for each database; the higher levels, and parts of the lower levels, of frequently used indexes; and recently accessed data pages.

The key points for memory configuration are:
- The system administrator should determine the size of shared memory available to Adaptive Server and set 'max memory' to this value.
- The configuration parameter 'allocate max shared memory' can be turned on at boot time and run time to allocate all the shared memory up to 'max memory' with the least number of shared memory segments. A large number of shared memory segments carries some performance degradation on certain platforms; check your operating system documentation to determine the optimal number of shared memory segments. Note that once a shared memory segment is allocated, it cannot be released until the next server reboot.
- Configure the individual configuration parameters if the defaults are not sufficient. The difference between 'max memory' and 'total logical memory' is additional memory available for the procedure cache, the data caches, or other configuration parameters.
- The amount of memory allocated by Adaptive Server during boot time is determined by either 'total logical memory' or 'max memory'. If this value is too high, Adaptive Server may not start if the physical resources on your machine are insufficient; if it does start, the operating system page fault rate may rise significantly and the operating system may need to be reconfigured to compensate.
Figure 4 How Adaptive Server uses memory
For good performance, the cache hit ratio should be over 90%; it can be found with the stored procedure sp_sysmon.
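The memory parameters discussed above are changed with sp_configure; a hedged example (the sizes are placeholders, not recommendations, and memory values are expressed in 2K units):

```sql
-- Inspect the current memory picture.
sp_configure 'total logical memory'
go
-- Allow ASE up to ~2 GB of shared memory
-- (1048576 * 2K units = 2 GB).
sp_configure 'max memory', 1048576
go
-- Grab all shared memory up to 'max memory' at boot time,
-- using the fewest shared memory segments.
sp_configure 'allocate max shared memory', 1
go
```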
Db_options
To change the default settings for a database we use database options. sp_dboption displays or changes database options. The list of database options is:

Table 2 DB options for the system databases
No.  DB option                      master  model  tempdb  sybsystemprocs
1    abort tran on log full         No      Yes    Yes     Yes
2    allow nulls by default         No      Yes    Yes     Yes
3    async log service              No      No     No      No
4    auto identity                  No      Yes    Yes     Yes
5    dbo use only                   No      Yes    Yes     Yes
6    ddl in tran                    No      Yes    Yes     Yes
7    delayed commit                 No      Yes    Yes     Yes
8    disable alias access           No      Yes    Yes     Yes
9    identity in nonunique index    No      Yes    Yes     Yes
10   no chkpt on recovery           No      Yes    Yes     Yes
11   no free space acctg            No      Yes    Yes     Yes
12   read only                      No      Yes    Yes     Yes
13   single user                    No      Yes    No      Yes
14   select into/bulkcopy/pllsort   No      Yes    Yes     Yes
15   trunc log on chkpt             No      Yes    Yes     Yes
16   unique auto_identity index     No      Yes    Yes     Yes

Syntax: sp_dboption [dbname, optname, optvalue [, dockpt]]
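For example, to turn on 'trunc log on chkpt' for a development database (the database name is an assumption):

```sql
-- Database options must be set from master, by the database owner
-- or a user with sa_role.
use master
go
sp_dboption appl3_dev, 'trunc log on chkpt', true
go
use appl3_dev
go
checkpoint        -- activate the changed option
go
```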
Configuration Parameters
Configuration parameters define server-wide settings and are divided into static and dynamic. The configured values are stored in the system table sysconfigures, and the values currently in use in syscurconfigs. The values are also stored in the .cfg file; every time you modify a configuration value, the current .cfg file is saved with a numbered extension (.001, .002, ...) and the new values appear in the .cfg file. The ASE cannot start without a valid configuration file, so always keep a backup of the configuration file.
Indexes
Indexes are created for faster retrieval of data. Indexes are preferred when the requested records total no more than about 5% of the table's rows. Whenever a new record is inserted into a table with no clustered index, it is stored in the last available page, called the hot spot; a table without a clustered index is known as a heap table. When there is no index on the table, the user's query performs a table scan (scanning each page that is allocated to the object); to avoid table scans we prefer indexes. Indexes can be broadly divided into two types:
Clustered indexes: there can be only one clustered index on a table, organized as a balanced tree. The leaf level contains the data itself, and the data is stored in physical order (asc/desc). There are 3 levels (root, intermediate & data/leaf). Whenever we recreate the clustered index, any existing nonclustered indexes are automatically recreated, and ASE internally runs update statistics on the object.
Nonclustered indexes: there can be as many as 249 nonclustered indexes on a table. The leaf nodes contain pointers that map to the actual data, which is why retrieval through a nonclustered index takes longer; the ordering is logical, not physical. There are 4 levels (root, intermediate, leaf & data). All the indexes for a database can be found in the table sysindexes: a heap table is identified by indid 0, a clustered index by indid 1, and nonclustered indexes by indid 2 to 249.
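As a sketch of the two index types (the table and column names are hypothetical):

```sql
-- One clustered index per table: data pages are kept in key order.
create clustered index ci_orders_id
    on orders (order_id)
go
-- Up to 249 nonclustered indexes: leaf rows point at the data.
create nonclustered index nc_orders_custdate
    on orders (customer_id, order_date)
go
-- Index details, including indid (0 = heap, 1 = clustered,
-- 2-249 = nonclustered), are visible in sysindexes.
select name, indid from sysindexes
where id = object_id('orders')
go
```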
The following diagrams illustrate clustered and nonclustered index pages.
Figure 5 Non clustered index
Figure 6 Clustered Index
Table 3 Page Split, Overflow Pages, Row Forwarding

Page split (allpages locking): if there is not enough room on the data page for the new row, a page split must be performed. A new data page is allocated on an extent already in use by the table; if there is no free page available, a new extent is allocated. The next- and previous-page pointers on adjacent pages are changed to incorporate the new page into the page chain, which requires reading those pages into memory and locking them. Approximately half of the rows are moved to the new page, with the new row inserted in order. The higher levels of the clustered index change to point to the new page. If the table also has nonclustered indexes, all pointers to the affected data rows must be changed to point to the new page and row locations. When you create a clustered index for a table that will grow over time, you may want to use fillfactor to leave room on data pages and index pages; this reduces the number of page splits for a time.

Overflow pages: special overflow pages are created for nonunique clustered indexes on allpages-locked tables when a newly inserted row has the same key as the last row on a full data page. A new data page is allocated and linked into the page chain, and the newly inserted row is placed on the new page. The only rows placed on this overflow page are additional rows with the same key value; in a nonunique clustered index with many duplicate key values, there can be numerous overflow pages for the same value. The clustered index does not contain pointers directly to overflow pages; instead, the next-page pointers are used to follow the chain of overflow pages until a value is found that does not match the search value.

Row forwarding (data-only locking): when a row in a data-only-locked table is updated so that it no longer fits on the page, a process called row forwarding performs the following steps: the row is inserted onto a different page, and a pointer to the row ID on the new page is stored in the original location for the row. Indexes do not need to be modified when rows are forwarded; all indexes still point to the original row ID.
Update Stats & Sp_recompile
update statistics helps the optimizer prepare the best plan for a query, based on the density of the index key values recorded in the sysstatistics table. Execute or schedule update statistics on heavily modified user objects on a daily basis. sp_recompile causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs.
Syntax: update statistics table_name [index_name]
Syntax: sp_recompile objname
Locks & Isolation Level
Adaptive Server
protects the tables, data pages, or data rows currently used by active transactions by locking them. Locking is a concurrency control mechanism: it ensures the consistency of data within and across transactions. Locking affects performance when one process holds locks that prevent another process from accessing needed data; the blocked process sleeps until the lock is released. This is called lock contention. A deadlock occurs when two user processes each hold a lock on a separate page or table and each wants to acquire a lock on the page or table held by the other process; the transaction with the least accumulated CPU time is killed and all of its work is rolled back.

Adaptive Server supports locking at the table, page, and row level:
- Allpages locking locks data pages and index pages; it can also acquire a single table-level lock.
- Datapages locking locks only the data pages; it acquires a lock for each page that contains one of the required rows.
- Datarows locking locks only the data rows; it acquires a lock on each row.
Adaptive Server thus has two levels of locking: for tables that use allpages or datapages locking, either page locks or table locks; for tables that use datarows locking, either row locks or table locks.

Page and row locks:
- Shared locks: Adaptive Server applies shared locks for read operations. If a shared lock has been applied to a data page, data row or index page, other transactions can also acquire a shared lock even while the first transaction is active. However, no transaction can acquire an exclusive lock on the page or row until all shared locks on it are released. This means that many transactions can simultaneously read the page or row, but no transaction can change data on it while a shared lock exists.
- Exclusive locks: Adaptive Server applies an exclusive lock for a data modification operation. When a transaction gets an exclusive lock, other transactions cannot acquire a lock of any kind on the page or row until the exclusive lock is released at the end of its transaction; the other transactions wait, or "block", until then.
- Update locks: Adaptive Server applies an update lock during the initial phase of an update, delete, or fetch (for cursors declared for update) operation, while the page or row is being read. Update locks help avoid deadlocks and lock contention. If the page or row needs to be changed, the update lock is promoted to an exclusive lock as soon as no other shared locks exist on the page or row.

Table locks:
- Intent lock: an intent lock indicates that page-level or row-level locks are currently held on a table. Adaptive Server applies an intent table lock with each shared or exclusive page or row lock, so an intent lock can be either exclusive or shared. Setting an intent lock prevents other transactions from subsequently acquiring conflicting table-level locks on the table that contains the locked page. An intent lock is held as long as page or row locks are in effect for the transaction.
- Shared lock: similar to a shared page or row lock, except that it affects the entire table. A create nonclustered index command also acquires a shared table lock.
- Exclusive lock: similar to an exclusive page or row lock, except that it affects the entire table. For example, Adaptive Server applies an exclusive table lock during a create clustered index command. Update and delete statements also require exclusive table locks if their search arguments do not reference indexed columns of the object.

Syslocks contains information about active locks; it is built dynamically when queried by a user, and no updates to it are allowed.

Deadlocks can be tuned with two options: 'deadlock checking period' specifies the minimum amount of time (in milliseconds) before Adaptive Server initiates a deadlock check for a process that is waiting on a lock to be released; 'deadlock retries' specifies the number of times a transaction can attempt to acquire a lock when deadlocking occurs during an index page split or shrink.

Spinlock ratio: a spinlock is a simple locking mechanism that prevents a process from accessing a system resource currently used by another process. All processes trying to access the resource must wait (or "spin") until the lock is released. If 100 is specified for a spinlock ratio, Adaptive Server allocates one spinlock for each 100 resources; the number of spinlocks allocated depends on the total number of resources as well as on the ratio specified. The lower the value specified for the spinlock ratio, the higher the number of spinlocks.

sp_lock reports information about processes that currently hold locks. Lock promotion can only happen from row level to table level and from page level to table level. Configuration parameters related to locks are: number of locks, lock scheme, lock wait period, lock spinlock ratio, page lock promotion HWM, page lock promotion LWM, page lock promotion PCT, row lock promotion HWM, row lock promotion LWM, row lock promotion PCT.

The isolation level controls the degree to
which operations and data in one transaction are visible to operations in other, concurrent transactions. ASE supports four isolation levels, of which level 1 is the default.
Level 0 - also known as read uncommitted, allows a task to read uncommitted changes to data in the database. This is also known as a dirty read, since the task can display results that are later rolled back.
Level 1 - also known as read committed, prevents dirty reads. Queries at level 1 can read only committed changes to data. At isolation level 1, if a transaction needs to read a row that has been modified by an incomplete transaction in another session, the transaction waits until the first transaction completes (either commits or rolls back).
Level 2 - also known as repeatable read, prevents non-repeatable reads. These occur when one transaction reads a row and a second transaction modifies that row. If the second transaction commits its change, subsequent reads by the first transaction yield results that are different from the original read.
Level 3 - also known as serializable reads, prevents phantoms. These occur when one transaction reads a set of rows that satisfy a search condition, and then a second transaction modifies the data (through an insert, delete, or update statement). If the first transaction repeats the read with the same search conditions, it retrieves a different set of rows.
Phases When Query Is Executed & Process Status
Whenever a query is executed, it passes through three phases: parser,
compiler and execute. The parser checks for syntactical errors; the compiler looks for an existing query plan in the procedure cache (if none is found, the optimizer prepares a new plan); execute looks for the required data in the data cache (if not found, it is read from disk). During these phases, a process can move through various states depending on the availability of I/O, the query plan, data, and so on. The possible states of a process are listed below.
Table 4: Process Status
Status | Condition | Effect of kill command
recv sleep | waiting on a network read | immediate
send sleep | waiting on a network send | immediate
alarm sleep | waiting on an alarm, such as waitfor delay "10:00" | immediate
lock sleep | waiting on a lock acquisition | immediate
sleeping | waiting on disk I/O or some other resource; probably indicates a process that is running but doing extensive disk I/O | killed when it "wakes up", usually immediate; a few sleeping processes do not wake up and require a server reboot to clear
runnable | in the queue of runnable processes | immediate
running | actively running on one of the server engines | immediate
infected | server has detected a serious error condition; extremely rare | kill command not recommended; a server reboot is probably required to clear the process
background | a process, such as a threshold procedure, run by the server rather than by a user process | immediate; use kill with extreme care - a careful check of sysprocesses is recommended before killing a background process
log suspend | process suspended by reaching the last-chance threshold on the log | killed when it "wakes up": 1) when space is freed in the log by a dump transaction command, or 2) when an SA uses the lct_admin function to wake up "log suspend" processes
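The status values above are what sp_who reports. As a quick illustration (the spid 42 below is hypothetical, not from this manual):

```sql
sp_who
go
-- a process shown in "lock sleep" can usually be killed immediately:
kill 42
go
```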
Hit/Miss Diagram
During the execution of a query, it goes through many phases,
which can result as a HIT or a MISS depending on the availability
of the data in the cache. The below diagrams clearly explain the
steps involved during HIT or MISS.
Figure 7: Steps when a query is executed by a user - HIT (the connection is established through the interfaces file by the network handler; a session is created; the query passes through parser, compiler and execute, using the procedure cache, data cache and user log cache inside ASE shared memory, and the data is fetched back to the user).
1. A connection is established between the user and ASE.
2. A new session is created for the user.
3. When a query is fired, it is passed to the parser, next to the compiler, and then executed until the result is fetched back to the user.
4. The parser checks for syntactical errors, the compiler checks for an existing query plan in the procedure cache, and execute checks for the corresponding data in the data cache.
5. If everything is found where expected, it is called a HIT.
Figure 8: Steps when a query is executed by a user - MISS (if the query plan is not found in the procedure cache, the optimizer prepares one using SYSSTATISTICS (update stats) and SYSQUERYPLANS; if the data is not found in the data cache, it is fetched from disk and sent back to the user).
1. A connection is established between the user and ASE.
2. A new session is created for the user.
3. When a query is fired, it is passed to the parser, next to the compiler, and then executed until the result is fetched back to the user.
4. The parser checks for syntactical errors; the compiler checks for a query plan in the procedure cache, and if none is found the optimizer prepares one; execute checks for the data in the data cache, and if it is not found it is read from disk and sent back to the user.
5. If any one of them (data or query plan) is not found where expected, it is called a MISS.
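The hit/miss behaviour described above can be observed with sp_sysmon. As a sketch, the following samples the data cache report section for one minute (the section abbreviation is the documented one for the data cache, but verify it on your ASE version):

```sql
sp_sysmon "00:01:00", dcache
go
-- the cache section of the report shows how often data was found in the
-- data cache versus read from disk
```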
Start and Shut Down Of Server
To start ASE, execute startserver -f RUN_<servername>. Use -m to start the server in single-user mode. The following example shows the RUN_servername file edited to start an Adaptive Server named TEST in single-user mode on UNIX:
#!/bin/sh
#
# Adaptive Server Information:
#  name:               TEST
#  master device:      /work/master.dat
#  master device size: 10752
#  errorlog:           /usr/u/sybase/install/errorlog
#  interfaces:         /usr/u/sybase/interfaces
#
/usr/u/sybase/bin/dataserver -d/work/master.dat -sTEST -e/usr/u/sybase/install/errorlog -i/usr/u/sybase/interfaces -c/usr/u/sybase/TEST.cfg -m
Use the -p option in the run server file to generate a new password for the SA; the new SA password is printed in the errorlog, and the ASE server must be rebooted for this. Once the configuration file is loaded, the server allocates its shared memory and creates the .krg file. Solaris/UNIX, or any environment, allocates shared memory for the ASE server; in version 12.5.3 a maximum of 3.7 GB can be allocated to the ASE server. IPC does the job of controlling and monitoring the shared segments of shared memory. The .krg file is automatically deleted when the ASE server goes offline. Shutdown of a server can be done in two modes:
Wait - a clean shutdown which checks and ensures that all transactions are closed.
Nowait - a "don't care" shutdown, which kills all transactions
forcefully.
Backup/Recovery/Refresh/Restore
Recovery - getting the database to the current state of data from a previously maintained backup.
Refresh - loading data from one database to another, irrespective of server.
Restore - taking the database back to a previous state.
Backup - taking an extra copy of the existing data.
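As an illustrative sketch (the database and dump-file names are hypothetical), these operations map onto commands like:

```sql
-- backup: take an extra copy
dump database testdb to "/dumps/testdb.dmp"
go
-- recovery/restore: load the copy back and bring the database online
load database testdb from "/dumps/testdb.dmp"
go
online database testdb
go
```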
Steps to be followed for a test refresh:
1. Take backups of the Prod and Test DBs; the db_options for Test should also be saved before the refresh operation.
2. BCP out the tables sysusers, sysprotects and sysaliases from the Test DB:
bcp test.dbo.sysusers out -Ulogin -Sserver
bcp test.dbo.sysprotects out -Ulogin -Sserver
bcp test.dbo.sysaliases out -Ulogin -Sserver
3. To copy the DDL of a table, use: ddlgen -Ulogin -Ppassword -S[server] -T[object_type] (if the user requested a backup of specific tables).
4. Load the Test DB with the Prod DB backup.
5. To allow modifying system tables: sp_configure "allow updates to system tables", 1.
6. Delete the rows from sysusers excluding the dbo user: delete from sysusers where name not in ("dbo").
7. Copy the data back into sysusers and sysprotects:
bcp test.dbo.sysusers in -Ulogin -Sserver
bcp test.dbo.sysprotects in -Ulogin -Sserver
8. Reset access to the system tables: sp_configure "allow updates to system tables", 0.
9. online database <database name>.
10. Set the db_options for the database.
11. Remap the users after the load/refresh.
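The system-table portion of the refresh can be sketched in isql as follows (the database name test is an assumption):

```sql
use master
go
sp_configure "allow updates to system tables", 1
go
use test
go
delete from sysusers where name not in ("dbo")
go
-- bcp the saved sysusers/sysprotects rows back in from the OS shell here
use master
go
sp_configure "allow updates to system tables", 0
go
```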
Run dbcc and update stats on a need basis.
Dbcc
Database consistency checker (dbcc) checks the logical and physical consistency of a database and provides statistics, planning, and repair functionality.
dbcc tablealloc checks the specified user table to ensure that:
All pages are correctly allocated.
Partition statistics on the allocation pages are correct.
No page is allocated that is not used.
All pages are correctly allocated to the partitions in the specified table and no page is used until allocated.
No page is used that is not allocated.
dbcc checkalloc ensures that:
All pages are correctly allocated.
Partition statistics on the allocation pages are correct.
No page is allocated that is not used.
All pages are correctly allocated to individual partitions and no page is used until allocated.
No page is used that is not allocated.
Syntax: dbcc checkalloc [(database_name [, fix | nofix])].
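For example, these allocation checks could be run as follows (the database and table names are hypothetical):

```sql
dbcc checkalloc(testdb)   -- report-only by default; use the fix option in single-user mode
go
dbcc tablealloc(mytable)
go
```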
dbcc indexalloc checks the specified index to see that:
All pages are correctly allocated.
No page is allocated that is not used.
No page is used that is not allocated.
dbcc checktable checks the specified table to see that:
Index and data pages are linked correctly.
Indexes are sorted properly.
Pointers are consistent.
All indexes and data partitions are correctly linked.
Data rows on each page have entries in the row-offset table, and these entries match the locations of the data rows on the page.
Partition statistics for partitioned tables are correct.
dbcc checkdb runs the same checks as dbcc checktable on each table in the specified database. If you do not give a database name, dbcc checkdb checks the current database. dbcc checkdb gives messages similar to those returned by dbcc checktable and makes the same types of corrections.
DBCCDB database setup:
1. Determine the size: sp_plan_dbccdb.
2. Initialize disk devices: based on the size, create data and log devices.
3. Create the dbccdb database on the devices created above.
4. Install the stored procedures: isql -Usa -P -SASESERVER -iinstalldbccdb -odbccdb_error.out.
5. Configure Adaptive Server: sp_configure "number of worker processes", 2.
6. Create workspaces.
7. Set the dbccdb configuration parameters.
8. Run dbcc checkstorage.
9. Evaluate the configurations.
PROXY TABLES
sp_addserver PROX_, NULL,
sp_addexternlogin PROX_, , ,
sp_addobjectdef proxy_, PROX_,,,,table
create proxy_table , proxy_ at PROX_,,,
MDA Tables
MDA tables
provide detailed information about server status; the activity of each process in the server; the utilization of resources such as data caches, locks and the procedure cache; and the resource impact of each query that is run on the server. Steps to be followed when installing the MDA tables:
1. Check sp_configure "enable cis" and set it to 1.
2. Add a loopback server name alias in master: sp_addserver loopback, null, @@servername.
3. Install the MDA tables: isql -Usa -P -S<servername> -i ~/scripts/installmontables.
4. Assign mon_role to the logins allowed MDA access: grant role mon_role to sa.
5. To test the basic configuration: select * from master..monState.
6. Set several configuration parameters: enable monitoring to 1, sql text pipe active to 1, sql text pipe max messages to 100, plan text pipe active to 1, plan text pipe max messages to 100, statement pipe active to 1, statement pipe max messages to 100, errorlog pipe active to 1, errorlog pipe max messages to 100, deadlock pipe active to 1, deadlock pipe max messages to 100, wait event timing to 1, process wait events to 1, object lockwait timing to 1, sql batch capture to 1, statement statistics active to 1, per object statistics active to 1, max sql text monitored to 2048.
Multiple Temp
Databases
Multiple tempdbs are useful when a user or an application needs a specific tempdb for its operations. If the other tempdbs fill up, the default tempdb helps bring them back to normal operation. The default tempdb is usually assigned to the SA, to avoid a server-wide tempdb-full condition. The number of user-created tempdbs can be configured according to the available hardware resources and the user applications. Steps to be followed when creating a tempdb:
1. Create devices for the tempdb, then: create temporary database <name> on <data device> = <size> log on <log device> = <size>.
2. To add the database as a tempdb: sp_tempdb "add", <name>, "default".
3. To bind a login to the tempdb: sp_tempdb "bind", "lg", <login>, "db", <name>.
4. To unbind: sp_tempdb "unbind", "lg", <login>.
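A sketch of the tempdb steps, with hypothetical device and login names (the devices are assumed to have been initialized with disk init first):

```sql
create temporary database tempdb_app on tempdb_data = 200 log on tempdb_log = 50
go
sp_tempdb "add", tempdb_app, "default"
go
sp_tempdb "bind", "lg", "appuser", "db", tempdb_app
go
```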
Utilities
Optdiag - displays optimizer statistics or loads updated
statistics into system tables. The advantages of Optdiag are: it can display statistics for all tables in a database, or for a single table; its output contains additional information useful for understanding query costs, such as index height and average row length; and it is frequently used for other tuning tasks, so you should have these reports on hand.
isql - interactive SQL parser for Adaptive Server. Syntax: isql -Uuser -Sserver -Ddatabase. When connecting to the server with isql, the client looks up the server's address in the interfaces file (sql.ini on Windows, interfaces on UNIX/Solaris).
ddlgen - used to take a backup of a table structure.
defncopy - used to take a backup of defaults, views, rules, stored procedures and triggers.
bcp - has two options, in and out. Out extracts data from the object to a flat file, and in is the reverse. In turn, "in" has two modes: fast bcp (non-logged) and slow bcp (logged). For fast bcp the option must be set to true: sp_dboption <dbname>, "select into/bulkcopy/pllsort", true.
Troubleshooting
If
server configuration values have reached the maximum threshold limit for number of open databases, number of open objects, number of open indexes, number of user connections or number of locks, follow the steps below:
sp_countmetadata - gives information about the total number of objects such as tables, stored procedures, views, triggers, etc. Syntax: sp_countmetadata "configname" [, dbname].
sp_monitorconfig - gives information about the maximum usage/current value of the above mentioned configuration parameters.
sp_configure - to reconfigure the parameters with a new value.
To check whether a port is open, telnet to it; if the port is open you will see a blank screen in the command prompt.
To abort an open transaction when the log is full: select lct_admin("abort", <spid>, <dbid>).
Process to kill the open transactions:
a. The spid of the currently running transaction can be identified from the table master..syslogshold.
b. Once the spid is identified, the user details can be found in the table master..sysprocesses.
c. Execute dbcc traceon(3604): turning on this trace flag sends the trace output to the console rather than the error log.
d. Execute dbcc sqltext(<spid>) to view the SQL text of the transaction.
e. Execute dbcc traceoff(3604).
f. To kill the process: kill <spid>.
To view server status in UNIX: showserver, and ps -eaf | grep sybase.
To know the server version details: select @@version, or the records in the error log, or dataserver -v.
To know the current isolation level used by the session: select @@isolation.
To manually clear shared segments: ipcrm -s/-m, verifying the values against the .krg file; once the memory is cleared, delete the .krg file. Also check for any active Sybase processes with ps -eaf | grep sybase, and kill the active Sybase processes related to the particular server.
Memory jam - occurs when shared segments in shared memory are not de-allocated. To recover, check how many servers are running, then match the .krg file with the details of the shared segment allocated to the server, and delete the identified segment using ipcrm -s <id>. It is always good practice to hold a backup of the .krg file, as it is deleted once the server
gets shut down.
Checking the server blocking process:
a. When a
process A is blocked by another process B, and B is blocked by C, and so on for N processes, they are all ultimately blocked by the Nth process. To unblock process A, all the blocking processes up to the Nth must be killed or terminated.
b. To retrieve the blocked and corresponding blocking process ids, execute a correlated query on the sysprocesses table.
c. Check the final process id:
dbcc traceon(3604)
go
dbcc sqltext(<spid>)
go
dbcc traceoff(3604)
go
Or:
dbcc set tracefile <file> for <spid>
go
set show_sqltext on
set showplan on
go
sp_helpapptrace
go
dbcc set tracefile off for <spid>
go
d. If it is a select operation, kill the process; otherwise wait until the operation is completed.
Recovering the Master Database:
a. Recovering the master database can be done only if a valid dump of the master database exists.
b. To build it again, the steps followed are:
i. dataserver -d<master device> -z<page size> -b<device size>.
ii. The user should log into the server in single-user mode.
iii. Load the master DB from the previous backup.
iv. Restart the server.
c. Backups of critical tables like syslogins, sysusages, sysdevices, sysdatabases and sysalternates should always be maintained.
Point-In-Time Recovery:
a. PITR can be done only if there is a valid full dump of the database, the transaction log dumps, and a dump of the current transaction log (dump tran <dbname> with no_truncate; this option allows the backup to be performed even if the database device fails).
b. To do PITR, the steps followed are:
i. Restore the full backup.
ii. Restore all the transaction logs in sequence.
iii. Restore the most recent transaction log, the one dumped with the no_truncate option.
iv. Bring the database online.
Extending the temp database to separate data and log segments:
a. Initialize data and log devices separately.
b. Extend the temp database onto these devices.
c. sp_configure "allow updates", 1 - allows updating the sysusages table.
d. delete from sysusages where dbid = 2 and segmap = <value>.
e. sp_configure "allow updates", 0.
f. sp_helpdb tempdb - shows the details of the temp database.
Log free space issue (showing a minus value in log segment usage):
Log on to the Sybase server:
use master
go
sp_stop_rep_agent <dbname>
go
Log on to the Replication Server:
admin health
go
suspend connection to <dataserver>.<dbname>
go
admin who
go
admin health
go
admin disk_space
go
Log on to the Sybase server:
use master
go
sp_dboption <dbname>, "dbo use only", true
go
sp_dboption <dbname>, "single user", true
go
select spid from sysprocesses where dbid = db_id("<dbname>")
go
(if any active processes are found, kill them)
use <dbname>
go
checkpoint
go
dbcc traceon(3604)
go
dbcc dbrepair(<dbname>, fixlogfreespace)
go
dbcc traceoff(3604)
go
use master
go
sp_dboption <dbname>, "dbo use only", false
go
sp_dboption <dbname>, "single user", false
go
Log on to the Replication Server:
admin health
go
resume connection to <dataserver>.<dbname>
go
admin who
go
admin health
go
admin disk_space
go
Log on to the Sybase server:
use master
go
sp_start_rep_agent <dbname>
go
TIPS
To find the ROWID, generate an identity column into a temporary table:
select rownum = identity(10), * into #tmp from <table>
go
select * from #tmp
go
drop table #tmp
go
How to Apply EBF
Sybase releases a bulletin listing the details of the updates and bug fixes (EBF details) for each version of the Sybase server. All of these patches/hot fixes are bundled into a package and released with the list of bugs fixed or enhancements to the product.
Process for patch deployment:
a. Download the patches from the Sybase site.
b. Create a temporary directory and unzip the patch file into this directory. For DOS installations, set the temporary directory as your current directory and run setup.exe in that directory. For Windows installations, double-click setup.exe in the temporary directory from Explorer.
c. The patch ZIP file contains a file called README.TXT, a text file documenting the bug fixes and changes made in this release of the software.
d. Evaluate the patches and verify that the patch is suitable for your current version.
e. Classify them into required and not required. Any patch presumed unnecessary for deployment now can be ignored, as the final version of the hot fix will be rolled out in the next service pack of the product.
f. Run basic DBCC checks to make sure all the databases are in good condition. If you find any errors in the DBCC output, fix the issue before rolling out the new patch.
g. Take a backup of all system and user databases, and BCP out specific system tables.
h. Lock all the users on the data server, as some post-install steps must be performed.
i. Bounce the server.
j. Deploy the required patches in the test environment.
k. Run the standard upgrade scripts like installmaster, installmsgs, installmontables from $SYBASE/ASE_125/scripts.
l. Verify the output for any issues.
m. Change the $SYBASE variable in .profile to point to the new EBF directory and source it.
n. Copy the run server file to the new location, if required (most clients keep the run server file in $SYBASE/ASE/install).
o. Bring up the server.
p. Verify the errorlog for any issues.
q. Validate the errorlog and fix any issues with applications.
r. Unlock all the users and hand the server over to the application teams.
s. If any hot fix is causing undesired results, do not deploy it until further testing.
t. Once the test environment looks stable with the new patch deployments and all issues seen during the deployment are resolved, the same can be moved to production.
u. Document all the steps, observations and workarounds carried out during the testing phase.
v. Prepare a checklist.
w. Plan the roll-out into production.
Query and Server Performance Tuning
Below are some tools
for query tuning:
a. showplan - tells you the final decisions that the optimizer makes about your queries.
b. set showplan on/off - turns showplan on or off.
c. set statistics io - displays the number of logical and physical reads and writes required for each table in a query. If resource limits are enabled, it also displays the total actual I/O cost.
d. dbcc traceon(3604, 302) - helps you understand why and how the optimizer makes its choices. It can help you debug queries and decide whether to use certain options, like specifying an index or a join order for a particular query.
e. dbcc traceon(3604, 310) - gives per-table I/O estimates.
f. dbcc traceon(3604, 317) - gives a report about all the plans.
g. set noexec on - prepares the query plan without executing the query.
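For instance, several of these diagnostics can be combined in one isql session (the table name below is hypothetical):

```sql
set showplan on
set statistics io on
go
select * from mytable where id = 42
go
set showplan off
set statistics io off
go
```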
Below are some points to take care of for better performance:
a. Data and log of tempdb should be placed on different devices.
b. Data and log of user databases should also be placed on different devices.
c. The data cache should be configured such that the hit/miss ratio is greater than 90%.
d. Statistics must be updated periodically; also exec sp_recompile.
e. Run update index statistics.
f. All nonclustered indexes should be placed on a separate segment.
g. All lookup tables should be placed on a separate segment.
h. Indexes should be proper; reorg/recreate if necessary.
i. Run reorg forwarded_rows/reorg rebuild, e.g.: select "reorg rebuild " + name + char(10) + "go" from sysobjects where type = "U" and lockscheme(name) not in ("allpages").
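As a sketch of the statistics maintenance mentioned above (the table name is hypothetical):

```sql
update index statistics mytable
go
exec sp_recompile mytable
go
```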
sp_monitor - displays statistics about Adaptive Server. Adaptive Server keeps track of how much work it has done in a series of global variables; sp_monitor displays the current values of these global variables and how much they have changed since the last time the procedure was executed.
sp_sysmon - displays performance information about Adaptive Server. It sets internal counters to 0 and then waits for the specified interval while activity on the server causes the counters to be incremented. When the interval ends, sp_sysmon prints information from the values in the counters.
If you face performance issues, check the physical_io in the sysprocesses table and trace the spid details:
dbcc traceon(3604)
dbcc sqltext(<spid>)
sp_who
sp_lock
sp_object_stats "00:10:00"
sp_monitorconfig "all"
Calculations
To find the vdev number: select max(low/16777216) from master..sysdevices.
Procedure cache size = (max number of concurrent users) * (4 + size of largest plan) * 1.25.
Minimum procedure cache size needed = (number of main procedures) * (average plan size).
To get a rough estimate of the size of a single stored procedure, view, or trigger, use: select (count(*) / 8) + 1 from sysprocedures where id = object_id("procedure_name").
Number of worker processes = [max parallel degree] x [number of concurrent connections wanting to run queries in parallel] x [1.5].
Size of tempdb = 20% of the sum of all the user databases.
Size of the log disk = 10% of the data disk of the particular device.
Size of the DBCC database - can be found from sp_plan_dbccdb.
Stored procedure to get the segment usage report:
SQL to calculate database usage:
select
"Database," = convert(char(20), db_name(dbid)) + ',',
"Data Size," = str(sum(size * abs(sign(segmap - 4))) / 512.0, 7, 2) + ',',
"Data Used," = str(sum((size - curunreservedpgs(dbid, lstart, unreservedpgs)) * abs(sign(segmap - 4))) / 512.0, 7, 2) + ',',
"Data Free," = str(100.0 * sum((curunreservedpgs(dbid, lstart, unreservedpgs)) * abs(sign(segmap - 4))) / sum(size * abs(sign(segmap - 4))), 3) + "%" + ',',
"Log Size," = str(sum(size * (1 - abs(sign(segmap - 4)))) / 512.0, 7, 2) + ',',
"Log Used," = str(sum((size - curunreservedpgs(dbid, lstart, unreservedpgs)) * (1 - abs(sign(segmap - 4)))) / 512.0, 7, 2) + ',',
"Log Free" = str(100.0 * sum((curunreservedpgs(dbid, lstart, unreservedpgs)) * (1 - abs(sign(segmap - 4)))) / sum(size * (1 - abs(sign(segmap - 4)))), 3) + "%"
from master..sysusages
where segmap < 5
group by db_name(dbid)
go
SQL to find the database and the related devices:
select dbid, size, name, phyname "physical device"
from sysusages, sysdevices
where name = 'xxx'
and vstart between low and high
compute sum(size)
go
SQL to find the max CPU utilized:
select spid, suser_name(suid), hostname, program_name, physical_io, memusage, ipaddr
from sysprocesses
order by physical_io
go
select name, accdate, totcpu, totio
from syslogins
order by totcpu
go
Entry in the interfaces file: master tli tcp /dev/tcp \x0002 0401 81 96 c451. This can be interpreted as:
x0002 - no user interpretation (header info)
0401 - port number (1025 decimal)
81 - first part of the IP address (129 decimal)
96 - second part of the IP address (150 decimal)
c4 - third part of the IP address (196 decimal)
51 - fourth part of the IP address (81 decimal)
Sybase Diagram
Replication Overview and Architecture Diagram Sybase Replication
Agent is the Sybase solution for replicating table data changing
operations and stored procedure invocations against a primary
database. Sybase Replication Agent extends the capabilities of
Replication Server by allowing non-Sybase (heterogeneous) database
servers to act as primary data servers in a replication system
based on Sybase replication technology. Rep Server is configured by
using command rs_init. Primary Dataserver - It is the source of
data where client applications enter/delete and modify data.This
need not be ASE; it can be Microsoft SQL Server, Oracle, DB2, and
Informix. Replication Agent/Log Transfer Manager- Log Transfer
Manager (LTM) is a separate program/process which reads transaction
log from the source server and transfers them to the replication
server for further processing. With ASE 11.5, this has become part
of ASE and is now called the Replication Agent. However, you still
need to use an LTM for non-ASE sources. When replication is active,
one connection per each replicated database in the source
dataserver (sp_who). Replication Server (s) - The replication
server is an Open Server/Open Client application. The server part
receives transactions being sent by either the source ASE or the
source LTM. The client part sends these transactions to the target
server which could be another replication server or the final
dataserver. Replicate (target) Dataserver It is the server in which
the final replication server (in the queue) will repeat the
transaction done on the primary. One connection for each target
database, in the target dataserver when the replication server is
actively transferring data (when idle, the replication server
disconnects, or "fades out" in replication terminology). Stable Queue - after Replication Server is installed, a disk partition is set up and used by Replication Server to establish stable queues. During replication operations, Replication Server temporarily stores updates in these queues. There are three different types of stable queues, each of which stores a different type of data. Inbound Queue -
holds messages only from a Replication Agent. If the database you
add contains primary data, or if request stored procedures are to
be executed in the database for asynchronous delivery, Replication
Server creates an inbound queue and prepares to accept messages
from a Replication Agent for the database. Outbound Queue- holds
messages for a replicate database or a replicate Replication
Server. There is one outbound queue for each of these destinations:
For each replicate database managed by a Replication Server, there
is a Data Server Interface (DSI) outbound queue. For every
Replication Server to which a Replication Server has a route, there
is a Replication Server Interface (RSI) outbound queue.
Subscription Materialization Queue- holds messages related to newly
created or dropped subscriptions. This queue stores a valid
transactional snapshot from the primary database during
subscription materialization or from a replicate database during
dematerialization. Stable Queue Manager manages all these
operations related to Stable Device. Data Server Interface- It
connects RDS and Rep Server. It reads data from Outbound Queue and
replicates to RDS according to the subscriptions. Distributor- It
takes care of sending committed transaction from inbound queue to
outbound queue with the help of Stable Queue Transaction (SQT)
Reader. Any exceptions encountered during the process are recorded
into rs_exception. Whenever two servers need to be connected over a WAN, they both have to be set up in a single domain. If there are more than 10 Replication Servers, Replication Monitoring Services (RMS) is required to manage all of them; otherwise they are managed by Replication Manager. There can be multiple replication definitions for a single table. Multiple servers can be connected in two ways: hierarchical and star. The time taken for a transaction to travel from the primary data server to the replicate data server is known as latency.
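A quick health check of a Replication Server from isql uses the admin commands (the same commands this manual's log-space runbook relies on):

```sql
admin health
go
admin who
go
admin disk_space
go
```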
Figure 9: Replication Architecture (the primary data server's transaction log is read by the Rep Agent, which forwards committed transactions in Log Transfer Language to the Replication Server; the secondary truncation point is found in rs_locator; the inbound queue, Stable Queue Transaction reader and distributor feed the outbound queue on the stable device, all managed by the Stable Queue Manager; the Data Server Interface applies the transactions, per the replication definitions and subscriptions, to the replicate data server).
Crontab
Job scheduling at the UNIX level is done in crontab. All the DBA jobs are scheduled to run automatically in crontab. Each user on a UNIX system has their own crontab, and one should have the privilege to add/modify the crontab. Syntax: crontab [ -e (opens the crontab editor) | -l (lists all the crontab entries) | -r (removes the crontab file) ]. Always keep a backup of the crontab.
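For example, a crontab entry to run a server status check every 15 minutes might look like this (the script and log paths are hypothetical):

```
0,15,30,45 * * * * /dba/scripts/check_server.sh >> /dba/logs/check_server.log 2>&1
```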
#!/bin/sh
# Author        : SYBASE DBA
# Created       : 13-10-2010
# Name          : dba_remap_users.sh
# Purpose       : This script ensures that users in a database are linked to a login
#                 in the sybase server that has the same name. Users that do not have
#                 a corresponding login name will be linked to suid 0.
# Description   :
# Modified by   :
# Modified date :
# Run from crontab as:
################################################################################
# Variables.
DBDIR=/dba/upgrade/scripts
LOGDIR=/dba/upgrade/logs
# Script to extract the database names.
echo "use master
go
set nocount on
go
select name from sysdatabases
go" > $DBDIR/DB_NAMES_SIZE.sql
# Extracting the database names.
isql -U -S -i$DBDIR/DB_NAMES_SIZE.sql -P$PASSWORD > $LOGDIR/DB_NAMES
# Turn on the allow updates to system tables to TRUE
isql -U$T_USER -S$T_SERVER -P$T_PASSWORD