Oracle Important Notes

TRANSPORT TABLESPACES

Data movement has never been a pleasant experience, especially when the database is very big and time is short. In this short article I explain eight simple steps for transporting a tablespace from one database to another.

Summary:

Step 1: Check for self-contained tablespace(s).
Step 2: Set the tablespaces in read-only mode.
Step 3: Gather the information about the data files belonging to the desired tablespaces.
Step 4: Export the tablespaces with the options TRANSPORT_TABLESPACE=Y TABLESPACES=<TABLESPACE_NAME> TRIGGERS=N.
Step 5: Physically copy the tablespace datafiles from the source to the target location. Note that the data file names can be changed in this process.
Step 6: Physically copy the export file created in step 4 from the source to the target system.
Step 7: Import the tablespace metadata into the target database, using the names the data files were given on the target system.
Step 8: Return the tablespace to read-write mode on either or both nodes.

Detail:

  

Checklist before deciding to transport a tablespace from a source to a target system:

Question                                                    Answer  Decision
----------------------------------------------------------  ------  ----------------
Is the transportable tablespace self-contained?             Yes     Can transport
Does it contain nested tables, varrays or bitmap indexes?   No      Can transport
Are you willing to move only part of the data?              Yes     Cannot transport
Do the source and target databases have the same
operating system (OS), block size and character set?        Yes     Can transport

This checklist is valid up to Oracle 8.1.7. Some of the restrictions, such as requiring the same Oracle block size, no longer apply in Oracle 9i. In 9i, if you are transporting a tablespace between databases whose block sizes differ, you have to set the matching DB_nK_CACHE_SIZE parameter on the target database (refer to the Oracle documentation).
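For example, a minimal sketch assuming the source tablespace uses a 16K block size while the target database's standard block size is 8K (the cache size shown is illustrative):

SQL> alter system set db_16k_cache_size = 16M;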


Step 1: Check for self-contained tablespace(s).

To check for self-contained tablespaces we use the DBMS_TTS.TRANSPORT_SET_CHECK procedure. This package is created by dbmsplts.sql, which is run by catproc.sql, and it populates the TRANSPORT_SET_VIOLATIONS table.

Svrmgrl> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK(TS_LIST=>'<TABLESPACE1>,<TABLESPACE2>', incl_constraints=><FALSE/TRUE>);

As you can see, this procedure takes two input parameters:
1. The transportable tablespace name(s).
2. Whether you want to include the constraints or not.

For the sake of simplicity, let us consider a scenario where we need to transport two tablespaces, SourceTablespace1 and SourceTablespace2, from a source to a destination database across a network. Assume that both databases are on Windows NT, the Oracle database version is 8.1.6.3.0, and both have the same database block size and the same character set.

Connect as SYS and execute the procedure as follows:

SQL> execute dbms_tts.transport_set_check(TS_LIST=>'SourceTablespace1,SourceTablespace2', incl_constraints=>TRUE);

PL/SQL procedure successfully completed.

If a tablespace is not self-contained, the procedure populates the TRANSPORT_SET_VIOLATIONS table. Query this table as follows to check for violations:

SQL> select * from transport_set_violations;

no rows selected

If it returns "no rows selected", the tablespaces are ready for transportation. Otherwise, some of the tablespace objects are linked with another tablespace, and you have to make these tablespaces self-contained first.

Step 2: Set the tablespaces in read-only mode.

SQL> alter tablespace SourceTablespace1 read only;

Tablespace altered.


SQL> alter tablespace SourceTablespace2 read only;

Tablespace altered.

Step 3: Gather the information about the data files belonging to the desired tablespaces.

SQL> desc dba_data_files;
 Name                                      Null?    Type
 ----------------------------------------- -------- -----------------
 FILE_NAME                                          VARCHAR2(513)
 FILE_ID                                            NUMBER
 TABLESPACE_NAME                                    VARCHAR2(30)
 BYTES                                              NUMBER
 BLOCKS                                             NUMBER
 STATUS                                             VARCHAR2(9)
 RELATIVE_FNO                                       NUMBER
 AUTOEXTENSIBLE                                     VARCHAR2(3)
 MAXBYTES                                           NUMBER
 MAXBLOCKS                                          NUMBER
 INCREMENT_BY                                       NUMBER
 USER_BYTES                                         NUMBER
 USER_BLOCKS                                        NUMBER

SQL> select tablespace_name, file_name from dba_data_files
  2  where tablespace_name in ('SourceTablespace1','SourceTablespace2');

TABLESPACE_NAME      FILE_NAME
-------------------- ------------------------------------------
SourceTablespace1    F:\OR8I\ORADATA\SourceTablespace11ORSV.DBF
SourceTablespace2    F:\OR8I\ORADATA\INDX01ORSV.DBF

Step 4: Export the tablespaces with the options TRANSPORT_TABLESPACE=Y TABLESPACES=<TABLESPACE_NAME> TRIGGERS=N.

C:\>set ORACLE_SID=SAMDB
C:\>exp 'sys/change_on_install as SYSDBA' Tablespaces=('SourceTablespace1','SourceTablespace2') TRANSPORT_TABLESPACE=Y CONSTRAINTS=N file=expdat.dmp

Export: Release 8.1.6.3.0 - Production on Thu May 3 11:20:50 2001
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.3.0 - Production
With the Partitioning option
JServer Release 8.1.6.3.0 - Production


Export done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace SourceTablespace1 ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                SAMTEST1
. . exporting table                SAMTEST2
. . exporting table                SAMTEST3
. . exporting table                SAMTEST4
. . exporting table                SAMTEST5
. . exporting table                SAMTEST6
. . exporting table                SAMTEST7
. . exporting table                SAMTEST8
. . exporting table                SAMTEST9
For tablespace SourceTablespace2 ...
. exporting cluster definitions
. exporting table definitions
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

Step 5: Physically copy the tablespace datafiles from the source to the target location. Note that the datafile names can be changed in this process.

c:\>copy F:\OR8I\ORADATA\SourceTablespace11ORSV.DBF ...... <target system>
c:\>copy F:\OR8I\ORADATA\INDX01ORSV.DBF .......... <target system>

Step 6: Physically copy the export file created in step 4 from the source to the destination location.

c:\>copy expdat.dmp ........... <target system>

On Unix, you can FTP the files in binary mode.

Step 7: Import the tablespace metadata into the target database, using the names the datafiles were given on the target system.

C:\>imp sys/mantra file=c:\temp\expdat.dmp transport_tablespace=Y datafiles=('E:\ORA816\DATABASE\SourceTablespace11ORSV.DBF','E:\ORA816\DATABASE\INDX01ORSV.DBF')


Import: Release 8.1.6.3.0 - Production on Thu May 3 12:23:03 2001
(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning option
JServer Release 8.1.6.3.0 - Production

Export file created by EXPORT:V08.01.06 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing SYS's objects into SYS
. importing tables ...
  :
  :
About to enable constraints...
. importing SourceTablespace1's objects into SourceTablespace1
Import terminated successfully without warnings.

Step 8: The tablespace can be returned to read-write mode on either or both nodes.

SQL> alter tablespace SourceTablespace1 read write;
SQL> alter tablespace SourceTablespace2 read write;

The query below can be used to find out which users have allocated the most space in the temporary tablespace.

select   se.username
        ,se.sid
        ,su.extents
        ,su.blocks * to_number(rtrim(p.value)) as space
        ,tablespace
        ,segtype
from     v$sort_usage su
        ,v$parameter  p
        ,v$session    se
where    p.name          = 'db_block_size'
and      su.session_addr = se.saddr
order by se.username, se.sid;

This script is very useful for an Oracle DBA.

FIND BLOCK CORRUPTION

The V$DATABASE_BLOCK_CORRUPTION view is used to find block corruption.
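For example, RMAN populates this view when it validates database blocks; a minimal sketch (9i Release 2 and later):

RMAN> backup validate database;

SQL> select file#, block#, blocks, corruption_type
     from v$database_block_corruption;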


The query below is used to find out chained rows (Oracle 10g only):

select * from table(dbms_space.asa_recommendations());

The large number of columns will make it difficult to see the output clearly, so here is just one record shown in vertical format.

TABLESPACE_NAME       : USERS
SEGMENT_OWNER         : ARUP
SEGMENT_NAME          : ACCOUNTS
SEGMENT_TYPE          : TABLE PARTITION
PARTITION_NAME        : P7
ALLOCATED_SPACE       : 0
USED_SPACE            : 0
RECLAIMABLE_SPACE     : 0
CHAIN_ROWEXCESS       : 17
RECOMMENDATIONS       : The object has chained rows that can be removed by re-org.
C1                    :
C2                    :
C3                    :
TASK_ID               : 261
MESG_ID               : 0

====================================================

My query was fine last week and now it is slow. Why?

The likely cause is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one taken when the query was performing well. Unfortunately, the previous plan is usually not available.

Some factors that can cause a plan to change are:

- Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using the RBO before and the CBO now?)
- Has OPTIMIZER_MODE been changed in INIT.ORA?
- Has the DEGREE of parallelism been defined/changed on any table?
- Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
- Have the statistics changed?
- Has the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
- Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
- Have any other INIT.ORA parameters been changed?

What do you think the plan should be? Run the query with hints to see if this produces the required performance.
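For example, to capture the current plan of a candidate query (9i Release 2 and later; the table, index and hint here are purely illustrative):

SQL> explain plan for
     select /*+ index(emp emp_pk) */ * from emp where empno = 7839;

SQL> select * from table(dbms_xplan.display);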

====================================================


If you want to get some idea of which users spend most time and consume most resources on the system, you don’t necessarily have to do anything subtle and devious to find out what’s been happening. There has been a (simple) audit trail built into the database for as long as I can remember. (The 8.1 – 10.1 in the banner simply covers the fact that I’ve only recently checked the following comments against those versions)

The init.ora file contains an audit_trail parameter. This can take the values true, false, none, os, db (the true/false options are for backwards compatibility). If you set the value to db (or true), then you have enabled auditing in the database. Once you have restarted the database (the parameter is not modifiable online), you can decide what events you want to audit.

For a quick cheap audit of connections, connect as a suitably privileged account (which typically means as a DBA), and issue the command:

    audit create session;

If you need to turn this audit off, the corresponding command is:

    noaudit create session;

The older syntax for the same level of audit is:

    audit connect;    noaudit connect;

With this level of audit turned on, every session that logs on (except the SYS sessions) will insert a row into the table sys.aud$ giving various details of who they are and what time they connected. When the session ends, its last action is to update this row with various session-related details, such as log-off time, and the amount of work done. To make the results more readable, Oracle has superimposed the view dba_audit_session on top of the aud$ table; the 9.2 version of this view is as follows:

 Name                    Null?    Type
 ----------------------- -------- ----------------
 OS_USERNAME                      VARCHAR2(255)       Who
 USERNAME                         VARCHAR2(30)
 USERHOST                         VARCHAR2(128)       Where
 TERMINAL                         VARCHAR2(255)
 TIMESTAMP               NOT NULL DATE                logon date/time
 ACTION_NAME                      VARCHAR2(27)
 LOGOFF_TIME                      DATE                logoff date/time
 LOGOFF_LREAD                     NUMBER              v$sess_io.consistent_gets
 LOGOFF_PREAD                     NUMBER              v$sess_io.physical_reads
 LOGOFF_LWRITE                    NUMBER              v$sess_io.block_changes
 LOGOFF_DLOCK                     VARCHAR2(40)        Number of deadlocks
 SESSIONID               NOT NULL NUMBER
 RETURNCODE              NOT NULL NUMBER
 CLIENT_ID                        VARCHAR2(64)


 SESSION_CPU                      NUMBER              Session statistic. CPU used by this session
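For example, a simple rollup over this view shows which users consumed the most resources (the column choice is illustrative):

SQL> select username, count(*) sessions, sum(session_cpu) total_cpu,
            sum(logoff_lread) logical_reads, sum(logoff_pread) physical_reads
     from   dba_audit_session
     where  logoff_time is not null
     group by username
     order by total_cpu desc;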

=========================================================================

How to relocate a temporary tablespace

1. shutdown immediate
2. startup mount
3. Copy (at OS level) the tempfile from the existing location to the new location.
4. alter database rename file '<existing location>' to '<new location>';
5. alter database open;
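Alternatively, from 9i onwards you can avoid the shutdown altogether by adding a tempfile in the new location and dropping the old one. A minimal sketch; the tablespace name, paths and size are assumptions:

SQL> alter tablespace temp add tempfile '/new/location/temp01.dbf' size 500M;
SQL> alter database tempfile '/old/location/temp01.dbf' drop including datafiles;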


How to enable LogMiner

LogMiner

LogMiner enables the analysis of the contents of archived redo logs. It can be used to provide a historical view of the database without the need for point-in-time recovery. It can also be used to undo operations, allowing repair of logical corruption.

Create Dictionary File
Adding Logs
Starting LogMiner
Querying Log Information
Stopping LogMiner

Create Dictionary File

Without a dictionary file LogMiner displays all tables and columns using their internal object IDs and all values as hex data. The dictionary file is used to translate this data into a more meaningful format. For the dictionary file to be created, the following initialization parameter must be set and the instance must be mounted or open.

UTL_FILE_DIR=C:\Oracle\Oradata\TSH1\Archive

The dictionary file is created using the BUILD procedure in the DBMS_LOGMNR_D package.

BEGIN
  DBMS_LOGMNR_D.build (
    dictionary_filename => 'TSH1dict.ora',
    dictionary_location => 'C:\Oracle\Oradata\TSH1\Archive');
END;
/

Adding Logs

A list of logs to be analyzed must be added to LogMiner using the DBMS_LOGMNR package. The first log in the list is added with the NEW option of the ADD_LOGFILE procedure, while subsequent logs are added with the ADDFILE option.

BEGIN
  DBMS_LOGMNR.add_logfile (
    options     => DBMS_LOGMNR.new,
    logfilename => 'C:\Oracle\Oradata\TSH1\Archive\TSH1\T001S00006.ARC');

  DBMS_LOGMNR.add_logfile (
    options     => DBMS_LOGMNR.addfile,
    logfilename => 'C:\Oracle\Oradata\TSH1\Archive\TSH1\T001S00007.ARC');
END;
/

Starting LogMiner

At this point LogMiner can be started using the overloaded START_LOGMNR procedure. The analysis range can be narrowed using time or SCN.

BEGIN
  -- Start using all logs
  DBMS_LOGMNR.start_logmnr (
    dictfilename => 'C:\Oracle\Oradata\TSH1\Archive\TSH1dict.ora');

  -- Specify time range
  DBMS_LOGMNR.start_logmnr (
    dictfilename => 'C:\Oracle\Oradata\TSH1\Archive\TSH1\dict.ora',
    starttime    => TO_DATE('01-JAN-2001 00:00:00', 'DD-MON-YYYY HH:MI:SS'),


    endtime      => TO_DATE('01-JAN-2001 10:00:00', 'DD-MON-YYYY HH:MI:SS'));

  -- Specify SCN range
  DBMS_LOGMNR.start_logmnr (
    dictfilename => 'C:\Oracle\Oradata\TSH1\Archive\TSH1\dict.ora',
    startscn     => 100,
    endscn       => 150);
END;
/

Querying Log Information

Once LogMiner is started, the contents of the log files can be queried using the following views:

V$LOGMNR_DICTIONARY - The dictionary file in use.
V$LOGMNR_PARAMETERS - Current parameter settings for LogMiner.
V$LOGMNR_LOGS - Which redo log files are being analyzed.
V$LOGMNR_CONTENTS - The contents of the redo log files being analyzed.

The following query displays the SQL issued along with the undo SQL to reverse it.

SELECT scn, operation, sql_redo, sql_undo
FROM   v$logmnr_contents;

The following query displays the number of hits for each object during the analyzed period.

SELECT   seg_owner, seg_name, COUNT(*) AS hits
FROM     v$logmnr_contents
WHERE    seg_name NOT LIKE '%$'
GROUP BY seg_owner, seg_name;

Stopping LogMiner

Once the analysis is complete, LogMiner should be stopped using the END_LOGMNR procedure.

BEGIN
  DBMS_LOGMNR.end_logmnr;
END;
/
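As a usage example of the V$LOGMNR_CONTENTS queries above, the view can also be filtered to trace a specific change while LogMiner is running (between START_LOGMNR and END_LOGMNR). A sketch, where the schema, table and operation are purely illustrative:

SQL> SELECT username, timestamp, sql_redo, sql_undo
     FROM   v$logmnr_contents
     WHERE  seg_owner = 'SCOTT'
     AND    seg_name  = 'EMP'
     AND    operation = 'DELETE';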

FIND ORACLE USER PRIVILEGES

SQL> select privilege from dba_sys_privs where grantee='CONNECT';

PRIVILEGE
----------------------------------------
CREATE VIEW
CREATE TABLE
ALTER SESSION
CREATE CLUSTER
CREATE SESSION
CREATE SYNONYM
CREATE SEQUENCE
CREATE DATABASE LINK

8 rows selected.


SQL>

ORACLE 10G

SQL> select privilege from dba_sys_privs where grantee='CONNECT';

PRIVILEGE
----------------------------------------
CREATE SESSION

SQL>

Exchange partition:

It is used to move data from a non-partitioned table to a partitioned table.

Step 1: Create a table without partitions.

SQL> CREATE TABLE my_table (
  id           NUMBER,
  description  VARCHAR2(50));

Table created.

SQL> INSERT INTO my_table (id, description) VALUES (1, 'One');

1 row created.

SQL> INSERT INTO my_table (id, description) VALUES (2, 'Two');

1 row created.

SQL> INSERT INTO my_table (id, description) VALUES (3, 'Three');

1 row created.

SQL> INSERT INTO my_table (id, description) VALUES (4, 'Four');

1 row created.

SQL> commit;

Step 2: Create another table, this time with partition options.

SQL> CREATE TABLE my_table_2 (
  id           NUMBER,
  description  VARCHAR2(50))
  PARTITION BY RANGE (id)
  (PARTITION my_table_part VALUES LESS THAN (MAXVALUE));

Table created.

Step 3: Exchange the partition with this command.

SQL> ALTER TABLE my_table_2
  EXCHANGE PARTITION my_table_part
  WITH TABLE my_table
  WITHOUT VALIDATION;

Step 4: After the above step, all my_table data will have been moved to the partitioned table my_table_2.

SQL> select * from my_table;

no rows selected

SQL> select * from my_table_2 partition(my_table_part);

        ID DESCRIPTION
---------- --------------------------------------------------
         1 One
         2 Two
         3 Three
         4 Four

As the output above shows, the records were first inserted into my_table only, but now all the table data has been moved to the my_table_2 partitioned table.

Optimize Oracle UNDO Parameters

Overview

Starting in Oracle9i, rollback segments are re-named undo logs. Traditionally transaction undo information was stored in Rollback Segments until a commit or rollback statement was issued, at which point it was made available for overlaying.

Best of all, automatic undo management allows the DBA to specify how long undo information should be retained after commit, preventing "snapshot too old" errors on long running queries.

This is done by setting the UNDO_RETENTION parameter. The default is 900 seconds (15 minutes), and you can set this parameter to guarantee that Oracle keeps undo logs for extended periods of time.


Rather than having to define and manage rollback segments, you can simply define an Undo tablespace and let Oracle take care of the rest. Turning on automatic undo management is easy.  All you need to do is create an undo tablespace and set UNDO_MANAGEMENT = AUTO.
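For example, a minimal setup might look like this (the tablespace name, path and sizes are illustrative):

-- init.ora / spfile parameters
UNDO_MANAGEMENT = AUTO
UNDO_TABLESPACE = undotbs1
UNDO_RETENTION  = 10800

SQL> create undo tablespace undotbs1
     datafile '/u01/oradata/undotbs01.dbf' size 200M;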

However, it is worth tuning the following important parameters:

1. The size of the UNDO tablespace
2. The UNDO_RETENTION parameter

Calculate UNDO_RETENTION for a given UNDO Tablespace

You can choose to allocate a specific size for the UNDO tablespace and then set the UNDO_RETENTION parameter to an optimal value according to the UNDO size and the database activity. If your disk space is limited and you do not want to allocate more space than necessary to the UNDO tablespace, this is the way to proceed. The following query will help you to optimize the UNDO_RETENTION parameter:

Optimal Undo Retention = Actual Undo Size / (DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC)

Because the following queries use the V$UNDOSTAT statistics, run them only after the database has been running with UNDO for a significant and representative time!

Actual Undo Size

SELECT SUM(a.bytes) "UNDO_SIZE"
  FROM v$datafile a,
       v$tablespace b,
       dba_tablespaces c
 WHERE c.contents = 'UNDO'
   AND c.status = 'ONLINE'
   AND b.name = c.tablespace_name
   AND a.ts# = b.ts#;

 UNDO_SIZE
----------
 209715200

Undo Blocks per Second

SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) "UNDO_BLOCK_PER_SEC"
  FROM v$undostat;

UNDO_BLOCK_PER_SEC
------------------
        3.12166667


DB Block Size

SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"
  FROM v$parameter
 WHERE name = 'db_block_size';

DB_BLOCK_SIZE [Byte]
--------------------
                4096

Optimal Undo Retention

209'715'200 / (3.12166667 * 4'096) = 16'401 [Sec]

Using inline views, you can do it all in one query!

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
       SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
       ROUND((d.undo_size / (to_number(f.value) *
       g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
  FROM (
       SELECT SUM(a.bytes) undo_size
         FROM v$datafile a,
              v$tablespace b,
              dba_tablespaces c
        WHERE c.contents = 'UNDO'
          AND c.status = 'ONLINE'
          AND b.name = c.tablespace_name
          AND a.ts# = b.ts#
       ) d,
       v$parameter e,
       v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
              undo_block_per_sec
         FROM v$undostat
       ) g
 WHERE e.name = 'undo_retention'
   AND f.name = 'db_block_size'
/

ACTUAL UNDO SIZE [MByte]
------------------------
200

UNDO RETENTION [Sec]
--------------------
10800

OPTIMAL UNDO RETENTION [Sec]
----------------------------
16401


Calculate Needed UNDO Size for given Database Activity

If you are not limited by disk space, then it would be better to choose the UNDO_RETENTION time that is best for you (for FLASHBACK, etc.). Allocate the appropriate size to the UNDO tablespace according to the database activity:

UNDO_SIZE = UNDO_RETENTION * DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC

Again, all in one query:

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
       SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
       (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
       g.undo_block_per_sec) / (1024*1024)
       "NEEDED UNDO SIZE [MByte]"
  FROM (
       SELECT SUM(a.bytes) undo_size
         FROM v$datafile a,
              v$tablespace b,
              dba_tablespaces c
        WHERE c.contents = 'UNDO'
          AND c.status = 'ONLINE'
          AND b.name = c.tablespace_name
          AND a.ts# = b.ts#
       ) d,
       v$parameter e,
       v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
              undo_block_per_sec
         FROM v$undostat
       ) g
 WHERE e.name = 'undo_retention'
   AND f.name = 'db_block_size'
/

ACTUAL UNDO SIZE [MByte]
------------------------
200

UNDO RETENTION [Sec]
--------------------
10800

NEEDED UNDO SIZE [MByte]
------------------------
131.695313

The previous query may return a "NEEDED UNDO SIZE" that is less than the "ACTUAL UNDO SIZE". If this is the case, you may be wasting space. You can choose to resize your UNDO tablespace to a lesser value or increase your UNDO_RETENTION parameter to use the additional space.
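For example, to reclaim the wasted space you could switch to a smaller undo tablespace. A sketch only; the names, path and size are assumptions, and the old tablespace can only be dropped once no active transactions still use its undo segments:

SQL> create undo tablespace undotbs2
     datafile '/u01/oradata/undotbs02.dbf' size 150M;
SQL> alter system set undo_tablespace = undotbs2;
SQL> drop tablespace undotbs1 including contents and datafiles;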


What are the advantages of ASM?

- Disk Addition—Adding a disk becomes very easy. No downtime is required and file extents are redistributed automatically.

- I/O Distribution—I/O is spread over all the available disks automatically, without manual intervention, reducing chances of a hot spot.

- Stripe Width—Striping can be fine grained as in Redo Log Files (128K for faster transfer rate) and coarse for datafiles (1MB for transfer of a large number of blocks at one time).

- Buffering—The ASM filesystem is not buffered, making it direct I/O capable by design.

- Kernelized Asynch I/O—There is no special setup necessary to enable kernelized asynchronous I/O, without using raw or third-party filesystems such as Veritas Quick I/O.

- Mirroring—Software mirroring can be set up easily, if hardware mirroring is not available.

Striping

ASM stripes files across all the disks in a disk group. The stripe granularity can be either coarse or fine.

Coarse Striping - 1MB chunks of space. Not used for controlfiles, online redo logs or flashback logs.
Fine Striping - 128K chunks of space. Can be used for controlfiles, online redo logs or flashback logs.

Aligning (db_file_multiblock_read_count x db_block_size) to 1MB and using coarse striping would yield a fairly optimal performance on most small-medium size installations.
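For example, with the common 8K block size the alignment works out as 128 x 8K = 1MB. A sketch; the value is illustrative and should be validated against your workload:

SQL> show parameter db_block_size
SQL> alter system set db_file_multiblock_read_count = 128;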

Mirroring

The Oracle-specific nature of ASM means that Oracle can mirror segment extents on separate disks in a disk group. There will be a primary extent on one disk and a mirrored extent on a different disk in the same disk group. Failure groups allow you to group disks so that primary and mirrored extents reside in separate failure groups. This would be necessary to mitigate the loss of a disk controller or I/O channel. There are 3 levels of redundancy:

External Redundancy - No mirroring. Relies on the O/S or an external logical volume manager. Failure groups are not allowed.

Normal Redundancy - 2-way mirroring. Requires at least 2 failure groups.

High Redundancy - 3-way mirroring. Requires at least 3 failure groups.
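For example, a normal-redundancy disk group with two failure groups could be created from the ASM instance like this (the disk group, failure group and disk names are illustrative):

SQL> create diskgroup dg1 normal redundancy
     failgroup fg1 disk '/dev/loop1', '/dev/loop2'
     failgroup fg2 disk '/dev/loop3', '/dev/loop4';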


Library error in Oracle

SQL> connect sys as sysdba
Enter password:
/usr/lib/hpux64/dld.so: Unable to find library 'libodm9.so'.
ERROR:
ORA-12547: TNS:lost contact

The above error occurs when the libodm9.so file is corrupted or misplaced in the $ORACLE_HOME/lib path.

SOLUTION: copy libodm9.so from a backup, or from any existing $ORACLE_HOME/lib/libodm9.so, to its original path.

Error when dropping a user

SQL> drop user GSSORACLE cascade;
drop user GSSORACLE cascade
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-24005: must use DBMS_AQADM.DROP_QUEUE_TABLE to drop queue tables

SOLUTION :

Execute the statements below as a DBA:

SQL> show user;
USER is "SYS"
SQL> alter session set events '10851 trace name context forever, level 2';

Then retry the DROP USER command.

To set the root password to never expire in Solaris

In Solaris, to set the root (or any user) password to never expire, execute the commands below:

bash-2.05# passwd -x -1 root
passwd: password information changed for root
bash-2.05# pwconv

To check whether the Solaris / Linux OS is 32-bit or 64-bit

Is my Operating System 64-bit?

In Solaris, from the command line (you don’t have to be root in most cases) run this command:

/usr/bin/isainfo -kv


If your OS is 64-bit, you will see output like:

64-bit sparcv9 kernel modules

If your OS is 32-bit, you will get this output:

32-bit sparc kernel modules

For Linux users

If you are running Linux, you can check your distribution with the uname command:

uname -m

The output will read x86_64 for 64-bit and i686 or similar for 32-bit.

How about this Oracle install? Is it 64-bit?

The question here is whether your Oracle binaries are 64-bit. While some of the binaries associated with Oracle may be 32-bit, the important ones will be 64-bit. To check those, follow these steps from the command line:

cd $ORACLE_HOME/bin
file oracl*

This will display the file type of your oracle binaries. If you are running 64-bit binaries, the output should look like this:

oracle:  ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped
oracleO: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped

If your binaries are 32-bit, the output will look like this:

oracle: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped

If you find you are running 32-bit and decide to go to 64-bit, be careful. The switch can be a bit tricky. Read the documentation closely and make sure your service contract is paid up!

Oracle 10g installation on Linux

Include these lines in /etc/sysctl.conf:

# Oracle Parameters
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.shmall = 32000
kernel.sem = 500 32000 100 128


fs.file-max = 65536

Additional network parameters to set:

[root@stgpxy2 core]# pwd
/proc/sys/net/core
[root@stgpxy2 core]# cat rmem_default
110592
[root@stgpxy2 core]# echo "262144" > /proc/sys/net/core/rmem_default
[root@stgpxy2 core]# cat rmem_default
262144
[root@stgpxy2 core]# echo "262144" > /proc/sys/net/core/rmem_max
[root@stgpxy2 core]# echo "262144" > /proc/sys/net/core/wmem_default
[root@stgpxy2 core]# echo "262144" > /proc/sys/net/core/wmem_max

DATA GUARD (9i / 10g): Implementing a Physical Standby Database

May 22nd, 2006 at 9:40 am by Yuri van Buren

Following our cookbook we installed CRS and RAC software on OCFS2 on our linux5 server.

See our PoC MAA- Blogs 5,6 and 7 about OCFS2, CRS and RAC install respectively.

Overview of the file system on linux5:

[oracle@linux5 ~]$ df -k

Filesystem           1K-blocks      Used Available Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

                      67829336  11887112  52496672  19% /

/dev/cciss/c0d0p1       102182      6774     95408   7% /boot/efi

none                   1020720         0   1020720   0% /dev/shm

/dev/sdj                522240    105408    416832  21% /u01/cd1

/dev/sdk                522240    102336    419904  20% /u02/cd2

/dev/sdl                522240     86976    435264  17% /u03/cd3

/dev/sdm              20971200   3783872  17187328  19% /u01/sw1

/dev/sdn              20971200    137984  20833216   1% /u09/sw2

/dev/sdo              26213888  11774080  14439808  45% /u02/oradata


/dev/sdp              26213888   2737664  23476224  11% /u03/oraflra

We followed the steps outlined in Chapter 3 of the Oracle Data Guard Concepts and Administration Guide to create our physical standby database.


 Step 1 Configure a Standby redo log

Ensure log file sizes are identical on the primary and standby databases:

The primary database has 8 redolog groups of 50Mb each (All 4 nodes have two groups each).

Because we use a backup to set up the standby database, the redo log files of the standby database are de facto the same size.

 Determine the appropriate number of standby redo log file groups:

As best practices you have to add:

(maximum number of logfiles for each thread + 1 ) * maximum number of threads = (2+1) * 4 = 12

 Verify related database parameters and settings:

Check MAXLOGFILES and MAXLOGMEMBERS settings for 10.1. databases.

Use: alter database backup controlfile to trace; + check the trace in the user_dump_dest directory. From 10.2 onwards the controlfile automatically expands for the MAX settings parameters. Be sure that 8 + 12 = 20 redolog files can be created.

If that is not the case re-create the controlfile on the primary database for 10.1. databases only.

Create standby redo log file groups:

Add 12 standby redolog groups on the primary database all of the same size with:

alter database add standby logfile group 9 ('/u02/oradata/prac/stredo09.log') size 50M;

alter database add standby logfile group 10 ('/u02/oradata/prac/stredo10.log') size 50M;

alter database add standby logfile group 11 ('/u02/oradata/prac/stredo11.log') size 50M;


alter database add standby logfile group 12 ('/u02/oradata/prac/stredo12.log') size 50M;

alter database add standby logfile group 13 ('/u02/oradata/prac/stredo13.log') size 50M;

alter database add standby logfile group 14 ('/u02/oradata/prac/stredo14.log') size 50M;

alter database add standby logfile group 15 ('/u02/oradata/prac/stredo15.log') size 50M;

alter database add standby logfile group 16 ('/u02/oradata/prac/stredo16.log') size 50M;

alter database add standby logfile group 17 ('/u02/oradata/prac/stredo17.log') size 50M;

alter database add standby logfile group 18 ('/u02/oradata/prac/stredo18.log') size 50M;

alter database add standby logfile group 19 ('/u02/oradata/prac/stredo19.log') size 50M;

alter database add standby logfile group 20 ('/u02/oradata/prac/stredo20.log') size 50M;

 Verify that the standby redo log file groups were created with:

select group#, thread#, sequence#, archived, status from v$standby_log;

 

    GROUP#    THREAD#  SEQUENCE# ARC STATUS

---------- ---------- ---------- --- ----------

         9          0          0 YES UNASSIGNED

        10          0          0 YES UNASSIGNED

        11          0          0 YES UNASSIGNED

        12          0          0 YES UNASSIGNED

        13          0          0 YES UNASSIGNED

        14          0          0 YES UNASSIGNED

        15          0          0 YES UNASSIGNED


        16          0          0 YES UNASSIGNED

        17          0          0 YES UNASSIGNED

        18          0          0 YES UNASSIGNED

        19          0          0 YES UNASSIGNED

        20          0          0 YES UNASSIGNED

 

12 rows selected.

Step 2 Set the Primary Database Initialization Parameters

Use the following alter system commands to change the necessary parameter settings:

 

alter system set log_archive_config='DG_CONFIG=(prac,stdb)' scope=both;

 alter system set log_archive_dest_1='LOCATION=/u02/oradata/prac_arch VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=prac' scope=both;

 alter system set log_archive_dest_2='SERVICE=stdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=stdb'  scope=both;

 alter system set log_archive_max_processes=30 scope=both;

 alter system set fal_server = 'stdb' scope=both;

 alter system set fal_client = 'prac' scope=both;

 alter system set db_file_name_convert='stdb','prac' scope=spfile;

 And the most important one: specify online redolog location on Primary and Standby location.                

alter system set log_file_name_convert = '/u02/oradata/prac/', '/u02/oradata/prac/' scope=spfile;

 Note that the scope is only spfile, so you have to restart the instance to make the parameter active.

 alter system set standby_file_management=AUTO scope=both;

Step 3 Check if ARCHIVELOG is enabled


Select log_mode from v$database;  Must be: ARCHIVELOG.

 If the database is in NOARCHIVELOG enable it by doing:

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

Step 4 Create a backup copy of the Primary Database datafiles

Use RMAN or, even simpler, use a COLD BACKUP.

 shutdown immediate;

 Copy all database files to the standby site

(This might take a while, depending on the size of your database and network speed.)

Step 5 Create a Control File and Parameter File for the Standby Database

startup mount;

alter database create standby controlfile as '/u02/oradata/prac/stdb.ctl';

create pfile='/u02/oradata/prac/initstdb.ora' from spfile;

alter database open;

 Modify this initstdb.ora file with “vi” according to the following:

This line is OK!! *.db_name='prac'

ADD line ==> *.db_unique_name='stdb'

 Flip names in these parameters to:

*.db_file_name_convert='prac','stdb'

*.log_archive_dest_1='LOCATION=/u02/oradata/prac_arch VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=stdb'

*.log_archive_dest_2='SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=prac'

*.fal_client='stdb'


*.fal_server='prac'

Remove both local and remote listener lines.

Save the file and copy it with the standby controlfile to the standby database server.

 Now we have on the standby database server in the /u02/oradata/prac directory the Cold Backup, the Standby Control File and the initstdb.ora file.

Step 6 Setup the environment to support the Standby database

Create a password file for the “prac” standby database on the standby server:

orapwd file=orapwprac.ora entries=10

Configure the Oracle Net Services names on the primary and standby server:

Add the following two tnsnames.ora entries on the primary server and standby server.

 prac =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = linux1_vip)(PORT = 1522))

    (ADDRESS = (PROTOCOL = TCP)(HOST = linux2_vip)(PORT = 1522))

    (ADDRESS = (PROTOCOL = TCP)(HOST = linux3_vip)(PORT = 1522))

    (ADDRESS = (PROTOCOL = TCP)(HOST = linux4_vip)(PORT = 1522))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = prac)     )   )

 stdb =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = linux5_vip)(PORT = 1521))

    )


    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = prac)     )   )

 Check if the listeners are configured on both the primary and standby database servers.

Restart the listeners on the primary servers with:

[oracle@linux1 prac]$ . oraenv

ORACLE_SID = [prac1] ? crs

[oracle@linux1 prac]$ srvctl stop listener -n linux1

[oracle@linux1 prac]$ srvctl start listener -n linux1

[oracle@linux1 prac]$ srvctl stop listener -n linux2

[oracle@linux1 prac]$ srvctl stop listener -n linux3

[oracle@linux1 prac]$ srvctl stop listener -n linux4

[oracle@linux1 prac]$ srvctl start listener -n linux2

[oracle@linux1 prac]$ srvctl start listener -n linux3

[oracle@linux1 prac]$ srvctl start listener -n linux4

 For a non RAC database use the familiar listener commands:

lsnrctl stop

lsnrctl start

Specify the correct listener name if needed, check the listener.ora file for the name.

Step 7 Start the Physical Standby Database

Copy the standby control file to the controlfile names mentioned in the initstdb.ora file:

cp stdb.ctl /u02/oradata/prac/control01.ctl

cp stdb.ctl /u02/oradata/prac/control02.ctl

cp stdb.ctl /u02/oradata/prac/control03.ctl


 Set the correct environment variables and do the following:

 [oracle@linux5 prac]$ . oraenv

ORACLE_SID = [prac] ? prac

[oracle@linux5 prac]$ export ORACLE_SID=prac1

[oracle@linux5 prac]$ sqlplus /nolog

 SQL*Plus: Release 10.2.0.1.0 - Production on Tue Feb 28 11:19:33 2006

 Copyright (c) 1982, 2005, Oracle.  All rights reserved.

 SQL> conn / as sysdba

Connected to an idle instance.

SQL> create spfile from pfile='/u02/oradata/prac/initstdb.ora';

SQL> startup mount;

ORACLE instance started.

 Total System Global Area  524288000 bytes

Fixed Size                  1995688 bytes

Variable Size             213912664 bytes

Database Buffers          306184192 bytes

Redo Buffers                2195456 bytes

Database mounted.

 SQL> alter database recover managed standby database disconnect from session;

 Database altered.

 YES!!!

Step 8 Test the Physical Standby Database is performing properly

Force several log switches on the primary database with:

alter system switch logfile;

 On the standby database connect as sysdba and check with;


select sequence#, applied, first_time, next_time, archived, status

from v$archived_log order by sequence#;

 SEQUENCE# APPLIED   FIRST_TIME      NEXT_TIME       ARCHIVED  STA

---------- --------- --------------- --------------- --------- ---

       426 YES       28-FEB-06       28-FEB-06       YES       A

       427 YES       28-FEB-06       28-FEB-06       YES       A

       428 YES       28-FEB-06       28-FEB-06       YES       A

       429 YES       28-FEB-06       28-FEB-06       YES       A

       430 YES       28-FEB-06       28-FEB-06       YES       A

       431 YES       28-FEB-06       28-FEB-06       YES       A

       432 NO        28-FEB-06       28-FEB-06       YES       A

       433 NO        28-FEB-06       28-FEB-06       YES       A

In the APPLIED column you can see up to which logfile the standby database has caught up.

 

To see the role of the standby database do:

select database_role, db_unique_name, name from v$database;

 DATABASE_ROLE        DB_UNIQUE_NAME       NAME

-------------------- -------------------- ---------------------------

PHYSICAL STANDBY     stdb                 PRAC

  You can also check the file system to see that the archived log files really arrive on the standby destination site.
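You can also watch the progress of managed recovery from the standby instance; a simple check against the V$MANAGED_STANDBY view:

SQL> select process, status, thread#, sequence#, blocks
     from v$managed_standby;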

Troubleshoot tips

Always check the alert file on both the primary and the standby database site.


On Unix systems use tail -f alertprac.log to see the new messages appear at the end of the file.

 You might get errors like these:

Error 12514 received logging on to the standby

Tue Feb 28 13:07:22 2006

Errors in file /u01/sw1/app/oracle/admin/prac/bdump/prac1_arc5_11753.trc:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

PING[ARC5]: Heartbeat failed to connect to standby 'stdb'. Error is 12514.

 OR

 Errors in file /u01/sw1/app/oracle/admin/prac/bdump/prac3_arcj_29779.trc:

ORA-12545: Connect failed because target host or object does not exist

PING[ARCj]: Heartbeat failed to connect to standby 'stdb'. Error is 12545.

 ==> These are Oracle Net Configuration errors, fix in listener.ora, tnsnames.ora or sqlnet.ora.

 Test the connectivity from the primary database to the standby with:

 [oracle@linux1 admin]$ sqlplus system/arnhem@stdb

 SQL*Plus: Release 10.2.0.1.0 - Production on Tue Feb 28 13:23:01 2006

 Copyright (c) 1982, 2005, Oracle.  All rights reserved.

 ERROR:

ORA-01033: ORACLE initialization or shutdown in progress

 Enter user-name:

THIS IS THE CORRECT MESSAGE YOU SHOULD GET SINCE THE STANDBY DATABASE IS IN MOUNT MODE!

or You might get the following error:

 Tue Feb 28 12:15:21 2006

Errors in file /u01/sw1/app/oracle/admin/prac/bdump/prac1_mrp0_31715.trc:


ORA-00313: open failed for members of log group 8 of thread 4

ORA-00312: online log 8 thread 4: '/u02/oradata/pracredo08.log'

ORA-27037: unable to obtain file status

Linux-ia64 Error: 2: No such file or directory

Additional information: 3

Clearing online redo logfile 8 /u02/oradata/pracredo08.log

Clearing online log 8 of thread 4 sequence number 334

Tue Feb 28 12:15:21 2006

 Thus:  /u02/oradata/pracredo08.log   There is a missing “/” between prac and redo08.log?

 Check the parameter log_file_name_convert: /u02/oradata/prac/, /u02/oradata/prac

Aha, a typo! Change it with:

alter system set log_file_name_convert = '/u02/oradata/prac/', '/u02/oradata/prac/' scope=spfile;

Note that you have to recycle the instance because the scope is spfile only!

So parameter settings are KEY!

 


2 Responses to “PoC MAA -Blog 10 - Implementing a Physical Standby Database”

1. SHAHIQUE ANWER Says: February 8th, 2007 at 1:23 pm

Clearing online log 1 of thread 1 sequence number 4

2. overmars Says: July 11th, 2007 at 10:40 am

your standby database is not a RAC database?

The following parameters should be set on the primary site: alter system set db_file_name_convert='prac','stdb' scope=spfile;


alter system set log_file_name_convert = 'prac', 'stdb' scope=spfile;


Simulating ASM with 10g Database by faking the hardware

Caution: These methods are for Testing & Learning purposes only.

Most Oracle 10g databases use ASM for storage, as it makes storage very simple to maintain with respect to disks, datafiles, etc.

You can get most of the ASM details & implementation details from OTN here: http://www.oracle.com/technology/asm/index.html

When I started to learn ASM, the really difficult part was getting test hardware to play with. I looked around for ways to simulate the ASM environment and have documented the procedure below:

Simulating ASM by faking hardware:
1. Faking Hardware
2. Installing ASMLib
3. Configuring the disks
4. Install DB & ASM instance

Faking Hardware:

Step 1) Here we create 4 zero-filled files using the dd command.

As "root" run the following commands, this will create 4 files of 2Gb each.

mkdir /asmdisk
dd if=/dev/zero of=/asmdisk/disk1 bs=1024k count=2000
dd if=/dev/zero of=/asmdisk/disk2 bs=1024k count=2000
dd if=/dev/zero of=/asmdisk/disk3 bs=1024k count=2000
dd if=/dev/zero of=/asmdisk/disk4 bs=1024k count=2000

Step 2) Use the loopback device to assign these disks.

/sbin/losetup /dev/loop1 /asmdisk/disk1
/sbin/losetup /dev/loop2 /asmdisk/disk2
/sbin/losetup /dev/loop3 /asmdisk/disk3
/sbin/losetup /dev/loop4 /asmdisk/disk4

Now we need to add entries to the file "/etc/rc.local" so that these devices are re-initialised on reboot as well.

Add the following entries to the file "/etc/rc.local"

/sbin/losetup /dev/loop1 /asmdisk/disk1
/sbin/losetup /dev/loop2 /asmdisk/disk2
/sbin/losetup /dev/loop3 /asmdisk/disk3
/sbin/losetup /dev/loop4 /asmdisk/disk4
/etc/init.d/oracleasm createdisk ASM1 /dev/loop1
/etc/init.d/oracleasm createdisk ASM2 /dev/loop2
/etc/init.d/oracleasm createdisk ASM3 /dev/loop3
/etc/init.d/oracleasm createdisk ASM4 /dev/loop4

Installing ASMLib

ASMLib is a support library for the Automatic Storage Management feature of Oracle Database 10g. ASMLib gives an Oracle database using ASM more efficient and capable access to the disk groups it is using. ASMLib is provided by Oracle at the link below: http://www.oracle.com/technology/tech/linux/asmlib/index.html


Unix command to find files modified in the past 3 days

Finding Any Files Modified in the Past 3 Days

$ cd ~
$ find . -mtime -3
./.bash_history
./examples
./examples/preamble.txt
./examples/other.txt
./example1.fil
./.viminfo

Now we start to really see the power of the find command. It has identified files not only in the working directory but in a subdirectory as well! Let’s verify the findings with some ls commands:

$ ls -alt
total 56
drwxrwxr-x 2 tclark authors 4096 Feb  3 17:45 examples
-rw------- 1 tclark tclark  8793 Feb  3 14:04 .bash_history
drwx------ 4 tclark tclark  4096 Feb  3 11:17 .
-rw------- 1 tclark tclark  1066 Feb  3 11:17 .viminfo
-rw-rw-r-- 1 tclark tclark     0 Feb  3 09:00 example1.fil
-rw-r--r-- 1 tclark authors    0 Jan 27 00:22 umask_example.fil
drwxr-xr-x 8 root   root    4096 Jan 25 22:16 ..
-rw-rw-r-- 1 tclark tclark     0 Jan 13 21:13 example2.xxx
-rw-r--r-- 1 tclark tclark   120 Aug 24 06:44 .gtkrc
-rw-r--r-- 1 tclark tclark    24 Aug 18 11:23 .bash_logout
-rw-r--r-- 1 tclark tclark   191 Aug 18 11:23 .bash_profile
-rw-r--r-- 1 tclark tclark   124 Aug 18 11:23 .bashrc
-rw-r--r-- 1 tclark tclark   237 May 22  2003 .emacs
-rw-r--r-- 1 tclark tclark   220 Nov 27  2002 .zshrc
drwxr-xr-x 3 tclark tclark  4096 Aug 12  2002 .kde

$ cd examples
$ ls -alt
total 20
drwxrwxr-x 2 tclark authors    4096 Feb  3 17:45 .
-rw-rw-r-- 1 tclark tclark        0 Feb  3 17:45 other.txt
-rw-rw-r-- 1 tclark authors     360 Feb  3 17:44 preamble.txt
drwx------ 4 tclark tclark     4096 Feb  3 11:17 ..
-rw-r--r-- 1 tclark authors    2229 Jan 13 21:35 declaration.txt
-rw-rw-r-- 1 tclark presidents 1310 Jan 13 17:48 gettysburg.txt

So we see that find has turned up what we were looking for. Now we will refine our search even further.

Finding .txt Files Modified in the Past 3 Days


Sometimes we are only concerned with specific files in the directory. For example, say you wrote a text file sometime in the past couple of days and now you can't remember what you called it or where you put it. Here's one way you could find that text file without having to go through your entire system:

$ find . -name '*.txt' -mtime -3
./preamble.txt
./other.txt

Now you’ve got even fewer files than in the last search and you could easily identify the one you’re looking for.

Find files by size

If a user is running short of disk space, they may want to find some large files and compress them to recover space. The following will search from the current directory and find all files larger than 10,000KB. The output has been abbreviated.

Finding Files Larger than 10,000k

# find . -size +10000k
./proc/kcore
./var/lib/rpm/Packages
./var/lib/rpm/Filemd5s
...
./home/stage/REPCA/repCA/wireless/USData.xml
./home/stage/REPCA/repCA/wireless/completebootstrap.xml
./home/stage/REPCA/repCA/wireless/bootstrap.xml
./home/bb/bbc1.9e-btf/BBOUT.OLD

Similarly, a - (minus) could be used in this example to find all files smaller than 10,000KB. Of course there would be quite a few of those on a Linux system.

The find command is quite flexible and accepts numerous options. We have only covered a couple of the options here but if you want to check out more of them take a look at find’s man page.

Most of find’s options can be combined to find files which meet several criteria. To do this we can just continue to list criteria like we did when finding .txt files which had been modified in the past three days.

Finding any files modified in the last 7 days.

# date
Fri Nov 30 12:01:52 IST 2007

# pwd
/
# find / -mtime -7

/oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle/control02.ctl
/oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle/control03.ctl
/oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle/control01.ctl



# cd /oracle/ora10gr2/oracle/product/10.2.0/oradata/oracle

# ls -al

total 1438020
drwxr-x---  3 oracle dba      4096 Oct 16 12:48 .
drwxr-x---  3 oracle dba      4096 Oct  9 16:56 ..
-rw-r--r--  1 oracle dba   7258112 Nov 30 12:01 control01.ctl
-rw-r--r--  1 oracle dba   7258112 Nov 30 12:01 control02.ctl
-rw-r--r--  1 oracle dba   7258112 Nov 30 12:01 control03.ctl

# cd /

# find / -name '*.log' -mtime -7

/var/log/boot.log

Finding files of type .trc which have changed in the past 7 days:

$ id
uid=501(oracle) gid=501(dba) groups=501(dba)

$ cd $ORACLE_HOME

$ pwd
/oracle/ora10gr2/oracle/product/10.2.0/db_1

$ find . -name '*.trc' -mtime -7

./admin/oracle/bdump/oracle_arc3_4232.trc
./admin/oracle/bdump/oracle_arc1_4228.trc
./admin/oracle/bdump/oracle_arc2_4230.trc
./admin/oracle/bdump/oracle_arc0_4226.trc

Finding the Oracle query utilizing the most CPU

# prstat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  2786 ora10g   3814M 3781M cpu1     0    0   0:02:16  37% oracle/11
  2801 ora10g   3814M 3780M sleep   59    0   0:01:43  14% oracle/11
  2780 ora10g   3814M 3780M sleep   59    0   0:01:03 0.2% oracle/11
  2294 ora10g   3384K 3272K sleep   59    0   0:00:04 0.1% prstat/1
 20377 ora10g   3810M 3771M sleep   59    0   0:07:12 0.1% oracle/1
 20405 ora10g   3809M 3770M sleep   59    0   0:05:31 0.1% oracle/1
  2930 root     3352K 3224K cpu0    59    0   0:00:00 0.1% prstat/1
 20478 ora10g    216M  109M sleep   29   10   0:05:24 0.1% java/24
 20401 ora10g   3811M 3773M sleep   59    0   0:03:06 0.0% oracle/1
  1680 noaccess  205M   47M sleep   59    0   1:00:12 0.0% java/25
 20395 ora10g   3814M 3775M sleep   59    0   0:02:40 0.0% oracle/19
 20485 ora10g   3812M 3778M sleep   59    0   0:04:00 0.0% oracle/1
 20393 ora10g   3824M 3774M sleep   59    0   0:56:06 0.0% oracle/15
 20489 ora10g   3810M 3776M sleep   59    0   0:00:57 0.0% oracle/1



 20383 ora10g   3819M 3778M sleep   59    0   0:03:33 0.0% oracle/258
 26511 ora10g   3810M 3776M sleep   59    0   0:00:10 0.0% oracle/1
 20387 ora10g   3816M 3777M sleep   59    0   0:03:35 0.0% oracle/258
 20512 ora10g   3811M 3777M sleep   59    0   0:00:44 0.0% oracle/1
 10516 ora10g     33M   22M sleep   59    0   0:35:16 0.0% emagent/6
 20397 ora10g   3812M 3774M sleep   59    0   0:00:47 0.0% oracle/1
  2778 btshuser 3016K 2240K sleep   59    0   0:00:00 0.0% bash/1
Total: 112 processes, 1334 lwps, load averages: 1.22, 1.11, 1.05

# ps -ef | grep 2786
  ora10g  2786   665  42 11:57:40 ?           2:26 oracleKNIGHTS (LOCAL=NO)
    root  2931  1866   0 12:02:49 pts/1       0:00 grep 2786

# su - ora10g
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
You have new mail.
-bash-3.00$ sqlplus " / as sysdba "

SQL*Plus: Release 10.2.0.3.0 - Production on Wed Dec 12 12:03:04 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> select name from v$database;

NAME
---------
KNIGHTS

SQL> select ses.sid SID, sqa.SQL_TEXT SQL
  2  from v$session ses, v$sqlarea sqa, v$process proc
  3  where ses.paddr=proc.addr
  4  and ses.sql_hash_value=sqa.hash_value
  5  and proc.spid= 2972;

       SID
----------
SQL
--------------------------------------------------------------------------------
       967 select /*+ use_merge(a b) index(b PK_ELMNTRY_PDT_ELMNTRY_PDT_ID) */ b.ELMNTRY_PDT_ID, b.ELMNTRY_PDT_NAME, b.ELMNTRY_PDT_ACTIV_FRM_DT, b.ELMNTRY_PDT_ACTIV_TO_DT, b.ELMNTRY_PDT_TYP_FLG, b.ELMNTRY_PDT_DESC, b.NRML_PDT_FLG, b.MS_PDT_ID, b.IMPRT_SEQ, a.ms_pdt_name, a.ms_pdt_desc, b.EXTN_FLG, b.rv_flg from UC_ELMNTRY_PRODUCT_MS b, (SELECT owner_id FROM PM_VIEW WHERE PRVDR_ID =:"SYS_B_00" AND TAB_DESC = :"SYS_B_01" UNION (SELECT XYZ.ELMNTRY_PDT_ID FROM (SELECT A.ELMNTRY_PDT_ID, A.MS_PDT_ID FROM UC_ELMNTRY_PRODUCT_MS A WHERE A.ELMNTRY_PDT_ID NOT IN (SELECT OWNER_ID FROM PM_VIEW WHERE TAB_DESC = :"SYS_B_02" )) XYZ, (SELECT A.MS_PDT_ID FROM UC_MASTER_PRODUCT_MS A WHERE A.MS_PDT_ID NOT IN ( SELECT OWNER_ID FROM PM_VIEW WHERE TAB_DESC = :"SYS_B_03" )) ABC WHERE XYZ.MS_PDT_ID = ABC.MS_PDT_ID UNION SELECT XYZ.ELMNTRY_PDT_ID FROM ( SELECT A.ELMNTRY_PDT_ID, A.MS_PDT_ID FROM UC_ELMNTRY_PRODUCT_MS A WHERE A.ELMNTRY_PDT_ID NOT IN (SELECT OWNER_ID FROM PM_VIEW WHERE TAB_DESC = :"SYS_B_04" )) XYZ, (SELECT A. …

SQL> select SID, SERIAL#, USERNAME from v$session where SID=967;

       SID    SERIAL# USERNAME
---------- ---------- ------------------------------
       967       2426 SIT_STAT

SQL> select s.sid, p.spid, s.osuser, s.username, s.program
  2  from v$session s, v$process p
  3  where s.paddr=p.addr and s.sid in ('967');

 SID       SPID         OSUSER     USERNAME     PROGRAM
-----------------------------------------------------------------------
 967       3981         wls815       SIT_STAT     JDBC Thin Client

Oracle Memory Allocations on Windows

Oracle's implementation of the database on Windows has some specifics not seen on other platforms (*nix, Linux). The main difference is that all Oracle processes (PMON, SMON, even server processes) run within a single OS process as separate threads. The reason for this is that on Windows you can't start a background process other than as a "service"; otherwise you would have to start it in a user session, and it would be stopped when the session ends (the user logs out). So Oracle had to "pack" all the background processes as threads within the single process that you see in the "Services" panel on Windows as "OracleServiceORCL" or similar, with ORCL representing your database's SID.

The main impact of this architecture is that the maximum amount of memory that can be used for both SGA and PGA is limited by the maximum amount of memory available to a single process. On Windows (we assume a 32-bit platform in this article), the default maximum user-addressable memory is 2GB. This limitation is imposed by 32-bit addressing (4GB) and the fact that Windows reserves 2GB of that space for kernel memory. So this means that the sum of SGA and PGA cannot exceed 2GB... Not a very pleasant situation, given that nowadays it is rather common to have a cheap Intel server running Windows and Oracle with 4GB of RAM.

What can be done to get around this? Well, the good news is that you can do something...

First, you can use the /3GB switch in the boot.ini file. Just put it like this:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(1)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(1)partition(2)\WINNT="Microsoft Windows 2000 Server" /fastdetect /3GB

What does it do? Basically, what it looks like: it pushes the limit from 2GB to 3GB. How? Windows reserves not 2GB but only 1GB for kernel use, which gives us an extra 1GB for the user process.

If you have over 4GB of RAM, you can also use another mechanism in Windows to leverage this memory. It is called Address Windowing Extensions (AWE) and is enabled by the /PAE switch in boot.ini:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(1)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(1)partition(2)\WINNT="Microsoft Windows 2000 Server" /fastdetect /3GB /PAE

But just setting the switch is not enough. Now you need to configure Oracle to use the extra memory you've got in there. Oracle does this by "mapping" the extra memory through a "window" in the usual memory space. One limitation here is that Oracle can do this only for the buffer cache, not the shared pool. So, what you need to set in Oracle is:

·  USE_INDIRECT_DATA_BUFFERS=TRUE in init.ora
·  An increased DB_BLOCK_BUFFERS in init.ora. (Note: if you use Oracle 9.2 and use DB_CACHE_SIZE instead of DB_BLOCK_BUFFERS, you will get an error starting the instance. Comment out DB_CACHE_SIZE and use DB_BLOCK_BUFFERS instead.)
·  You might want to adjust a registry setting in HKLM\Software\Oracle\Homex called AWE_WINDOW_MEMORY. It specifies the size of the "window" that Oracle will use for accessing the extra memory. See Metalink Note 225349.1 for details on calculating the minimum size of this setting.

So, to summarize all of the above:

·  Use the /3GB switch to increase the memory space available to Oracle from the default 2GB to 3GB.
·  Use /PAE and USE_INDIRECT_DATA_BUFFERS to utilize memory over 4GB.
·  Remember that SGA and PGA come from the same address space, so do not set your SGA to 3GB - you would have no memory left for client processes. You need to estimate client process memory requirements and size the SGA accordingly to leave enough memory for the PGA. (On 9i you can use PGA_AGGREGATE_TARGET to better manage the PGA.)
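To make this concrete, here is a minimal init.ora sketch for the AWE case, assuming an 8KB block size; the buffer count is a placeholder you would size for your own server, not a recommendation:

# 32-bit Windows, with /3GB and /PAE set in boot.ini
use_indirect_data_buffers=true
# buffer cache is specified in blocks here, not bytes: 262144 x 8KB = 2GB of cache
db_block_buffers=262144
# on 9.2, db_cache_size must not be set together with use_indirect_data_buffers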

The Need for Virtual Private Databases

When I first began working for Oracle, I was asked to work on a Department of Defense (DoD) project that was using a special version of Oracle called Trusted Oracle. Trusted Oracle ran on special “trusted” operating systems. I was familiar with Oracle, and I was familiar with UNIX operating systems, but working with Trusted Oracle was really bizarre. A lot of what I had learned about access controls and security was somehow deficient in this world.

The one behavior that I quickly realized was distinctly different was that Trusted Oracle transparently filtered data records. I found out that the DoD security requirements dictated mandatory separation of records based on a user’s authorizations. In this case the users were authorized for access at different sensitivity levels—SECRET, CONFIDENTIAL, and UNCLASSIFIED. The data was intermingled within tables at various sensitivity levels. One user accessing the data would see one set of records, and a different user with different authorizations would see a different set of records.

The interesting part was that the security was implemented so that it was transparent and could not be subverted. The manner in which Trusted Oracle behaved and the requirements from customers in other industries gave Oracle the idea of abstracting the row-level security features from Trusted Oracle into a framework that would support practically any data model and security policy. This was the genesis of the Virtual Private Database technology.

Officially, the phrase “Virtual Private Database (VPD)” refers to the use of row-level security (RLS) and the use of application contexts. (Application contexts were discussed in detail in Chapter 9.) However, the term “VPD” is commonly used when discussing the use of the row-level security features irrespective of implementation.

Row-Level Security Quick Start

Many examples you see using VPD involve the use of application contexts and/or several data tables with esoteric column names and complicated referential integrity constraints. I find that these elements, while truthful in their representation of many database schemas, tend to confuse and mislead the reader about how the row-level security technology works and precisely what is needed to enable it. Using RLS is easy, and the purpose of this section is to prove this very point.

VPD’s row-level security allows you to restrict access to records based on a security policy implemented in PL/SQL. A security policy, as used here, simply describes the rules governing access to the data rows. This process is done by creating a PL/SQL function that returns a string. The function is then registered against the tables, views, or synonyms you want to protect by using the DBMS_RLS PL/SQL package. When a query is issued against the protected object, Oracle effectively appends the string returned from the function to the original SQL statement, thereby filtering the data records.


Quick Start Example

This example will focus on the process required to enable RLS. The intention is to keep the data and security policy simple so as not to distract from how to enable an RLS solution.

The RLS capability in Oracle requires a PL/SQL function. The function accepts two parameters, as shown next. The database will call this function automatically and transparently. The string value returned from the function (called the predicate) will be effectively added to the original SQL. This results in an elimination of rows and thus provides the row-level security.

The security policy for this example will exclude Department 10 records from queries on SCOTT.EMP. The PL/SQL function to implement this will look as follows:

sec_mgr@KNOX10g> CREATE OR REPLACE FUNCTION no_dept10 (
  2    p_schema  IN VARCHAR2,
  3    p_object  IN VARCHAR2)
  4    RETURN VARCHAR2
  5  AS
  6  BEGIN
  7    RETURN 'deptno != 10';
  8  END;
  9  /

Function created.

To protect the SCOTT.EMP table, simply associate the preceding PL/SQL function to the table using the DBMS_RLS.ADD_POLICY procedure:

sec_mgr@KNOX10g> BEGIN
  2    DBMS_RLS.add_policy
  3       (object_schema    => 'SCOTT',
  4        object_name      => 'EMP',
  5        policy_name      => 'quickstart',
  6        policy_function  => 'no_dept10');
  7  END;
  8  /

PL/SQL procedure successfully completed.

That’s it; you are done! To test this policy, log on as a user with access to the SCOTT.EMP table and issue your DML. The following shows all the department numbers available in the table. Department 10 is no longer seen because the RLS policy transparently filters out those records:

scott@KNOX10g> -- Show department numbers.
scott@KNOX10g> -- There should be no department 10.
scott@KNOX10g> SELECT DISTINCT deptno FROM emp;

   DEPTNO
---------
       20
       30

The important point is that row-level security can be trivial to implement.


NOTE

RLS has no requirements or dependencies on the use of application contexts, the user’s identity, or the predicate referencing the table’s columns.

Changing the security implementation is trivial, too. Suppose the security policy is changed so that no records should be returned for the user SYSTEM:

sec_mgr@KNOX10g> -- change policy implementation to
sec_mgr@KNOX10g> -- remove all records for the SYSTEM user
sec_mgr@KNOX10g> CREATE OR REPLACE FUNCTION no_dept10 (
  2    p_schema  IN  VARCHAR2,
  3    p_object  IN  VARCHAR2)
  4    RETURN VARCHAR2
  5  AS
  6  BEGIN
  7    RETURN 'USER != ''SYSTEM''';
  8  END;
  9  /

Function created.

sec_mgr@KNOX10g> -- Test by counting records as SCOTT
sec_mgr@KNOX10g> -- then by counting records as SYSTEM
sec_mgr@KNOX10g> CONN scott/tiger
Connected.
scott@KNOX10g> SELECT COUNT(*) Total_Records FROM emp;

TOTAL_RECORDS
-------------
           14

scott@KNOX10g> CONN system/manager
Connected.
system@KNOX10g> SELECT COUNT(*) Total_Records FROM scott.emp;

TOTAL_RECORDS
-------------
            0

Notice that the security policy implemented by the function can change without requiring any re-registration with the DBMS_RLS package.
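Removing the policy is equally simple: it can be de-registered with DBMS_RLS.DROP_POLICY. A minimal sketch using the names from the example above:

sec_mgr@KNOX10g> BEGIN
  2    DBMS_RLS.drop_policy
  3       (object_schema => 'SCOTT',
  4        object_name   => 'EMP',
  5        policy_name   => 'quickstart');
  6  END;
  7  /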

rem -----------------------------------------------------------------------
rem Filename:   sga_stat.sql
rem Purpose:    Display database SGA statistics
rem Date:       14-Jun-2001
rem Author:     Anjan Roy ([email protected])
rem -----------------------------------------------------------------------

DECLARE
  libcac number(10,2);
  rowcac number(10,2);
  bufcac number(10,2);
  redlog number(10,2);
  spsize number;
  blkbuf number;
  logbuf number;
BEGIN
  select value into redlog from v$sysstat
  where name = 'redo log space requests';

  select 100*(sum(pins)-sum(reloads))/sum(pins) into libcac from v$librarycache;
  select 100*(sum(gets)-sum(getmisses))/sum(gets) into rowcac from v$rowcache;

  select 100*(cur.value + con.value - phys.value)/(cur.value + con.value) into bufcac
  from v$sysstat cur, v$sysstat con, v$sysstat phys,
       v$statname ncu, v$statname nco, v$statname nph
  where cur.statistic# = ncu.statistic#
    and ncu.name = 'db block gets'
    and con.statistic# = nco.statistic#
    and nco.name = 'consistent gets'
    and phys.statistic# = nph.statistic#
    and nph.name = 'physical reads';

  select value into spsize from v$parameter where name = 'shared_pool_size';
  select value into blkbuf from v$parameter where name = 'db_block_buffers';
  select value into logbuf from v$parameter where name = 'log_buffer';

  dbms_output.put_line('>                   SGA CACHE STATISTICS');
  dbms_output.put_line('>                   ********************');
  dbms_output.put_line('>              SQL Cache Hit rate = '||libcac);
  dbms_output.put_line('>             Dict Cache Hit rate = '||rowcac);
  dbms_output.put_line('>           Buffer Cache Hit rate = '||bufcac);
  dbms_output.put_line('>         Redo Log space requests = '||redlog);
  dbms_output.put_line('> ');
  dbms_output.put_line('>                     INIT.ORA SETTING');
  dbms_output.put_line('>                     ****************');
  dbms_output.put_line('>               Shared Pool Size = '||spsize||' Bytes');
  dbms_output.put_line('>                DB Block Buffer = '||blkbuf||' Blocks');
  dbms_output.put_line('>                    Log Buffer  = '||logbuf||' Bytes');
  dbms_output.put_line('> ');
  if libcac < 99 then
    dbms_output.put_line('*** HINT: Library Cache too low! Increase the Shared Pool Size.');
  end if;
  if rowcac < 85 then
    dbms_output.put_line('*** HINT: Row Cache too low! Increase the Shared Pool Size.');
  end if;
  if bufcac < 90 then
    dbms_output.put_line('*** HINT: Buffer Cache too low! Increase the DB Block Buffer value.');
  end if;
  if redlog > 100 then
    dbms_output.put_line('*** HINT: Log Buffer value is rather low!');
  end if;
END;
/
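The script reports through DBMS_OUTPUT, so remember to enable server output in SQL*Plus before running it, for example:

SQL> set serveroutput on size 100000
SQL> @sga_stat.sql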


Friday, March 23, 2007
Db file sequential read while doing full table scan?

These days we are working on a data warehouse in which we have a master table that gets about 1.5m rows inserted every half hour, and a few fast-refresh materialized views based on it. These mviews involve some aggregate functions, which makes them a bit complex.

To start with, each mview refresh used to take some 18-20 mins, which was totally against the business requirement. So we tried to figure out why the refresh was taking so much time, even after dropping all the bitmap indexes on the mview (generally bitmap indexes are not good for inserts/updates).

The 10046 trace (level 12) highlighted that there were many “db file sequential reads” on the mview because the optimizer was using the “I_SNAP$_mview” index to fetch the rows from the mview and merge them with those of the master table to build the aggregated data for the mview.

The good part of the story is that access to the master table was quite fast, because we used direct load (sqlldr direct=y) to insert the data into it. When you use direct load, Oracle maintains the list of rowids added to the table in a view called “SYS.ALL_SUMDELTA”. So while doing a fast mview refresh, the newly inserted rows are picked directly from the table using the rowids given by the ALL_SUMDELTA view and not from the mview log, which saves time.

The concerning part was that Oracle was still using the I_SNAP$ index while fetching the data from the mview, and there were many “db file sequential read” waits; it was clearly visible that Oracle waited on sequential reads the most. By running a simple test against the table, we figured out that a full table scan (which uses scattered reads and the multiblock read count) was very fast in comparison to index access. Also, the master table and dependent mviews hold only the current day's data: at the end of the day the data is pushed to historical tables, and the master table and mviews are emptied post midnight.

I gathered the stats on the mview, re-ran the mview refresh, and traced the session; this time the optimizer didn't use the index, which was good news.

Now the challenge was to either run the mview stats-gathering job every half hour, induce wrong stats on the table/index so that the refresh never uses index access, or perhaps lock the stats using DBMS_STATS.LOCK_TABLE_STATS.

But we found another solution: creating the mview with the “USING NO INDEX” clause. This way the “I_SNAP$” index is not created by the “CREATE MATERIALIZED VIEW” command. As per Oracle, the “I_SNAP$” index is good for fast refresh, but it proved to be the reverse for us, because our environment is different and the data changes are quite frequent. A sketch of the clause follows.
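For reference, a minimal sketch of where the clause sits; the mview name, refresh options, and query are placeholders rather than the actual warehouse objects, and a fast-refresh aggregate mview additionally needs a suitable materialized view log on the master table:

CREATE MATERIALIZED VIEW my_agg_mv
  REFRESH FAST ON DEMAND
  USING NO INDEX
AS
SELECT dept_id, COUNT(*) cnt, SUM(amount) total
FROM   master_tab
GROUP  BY dept_id;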

Now, when we ran the tests again and loaded 48 slices of data (24 hrs x 2 loads per hour), the results were beyond expectations. We could load the data in at most 3 mins per slice.

This is not the end of the story. In the trace we could see the mview refresh using a “MERGE” command, with full table scan access to the mview (which we wanted) and rowid range access to the master table.

The explain plan looks like:

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  MERGE  SF_ENV_DATA_MV (cr=4598 pr=5376 pw=0 time=47493463 us)
 263052   VIEW  (cr=3703 pr=3488 pw=0 time=24390284 us)
 263052    HASH JOIN OUTER (cr=3703 pr=3488 pw=0 time=24127224 us)
 263052     VIEW  (cr=1800 pr=1790 pw=0 time=14731732 us)
 263052      SORT GROUP BY (cr=1800 pr=1790 pw=0 time=14205624 us)
 784862       VIEW  (cr=1800 pr=1790 pw=0 time=3953958 us)
 784862        NESTED LOOPS  (cr=1800 pr=1790 pw=0 time=3169093 us)
      1         VIEW  ALL_SUMDELTA (cr=9 pr=0 pw=0 time=468 us)
      1          FILTER  (cr=9 pr=0 pw=0 time=464 us)
      1           MERGE JOIN CARTESIAN (cr=9 pr=0 pw=0 time=459 us)
      1            NESTED LOOPS  (cr=6 pr=0 pw=0 time=99 us)
      1             TABLE ACCESS BY INDEX ROWID OBJ$ (cr=3 pr=0 pw=0 time=56 us)
      1              INDEX UNIQUE SCAN I_OBJ1 (cr=2 pr=0 pw=0 time=23 us)(object id 36)
      1             TABLE ACCESS CLUSTER USER$ (cr=3 pr=0 pw=0 time=40 us)
      1              INDEX UNIQUE SCAN I_USER# (cr=1 pr=0 pw=0 time=7 us)(object id 11)
      1            BUFFER SORT (cr=3 pr=0 pw=0 time=354 us)
      1             INDEX RANGE SCAN I_SUMDELTA$ (cr=3 pr=0 pw=0 time=243 us)(object id 158)
      0           NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0            INDEX RANGE SCAN I_OBJAUTH1 (cr=0 pr=0 pw=0 time=0 us)(object id 103)
      0            FIXED TABLE FULL X$KZSRO (cr=0 pr=0 pw=0 time=0 us)
      0           FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
 784862         TABLE ACCESS BY ROWID RANGE SF_ENV_SLICE_DATA (cr=1791 pr=1790 pw=0 time=2383760 us)
 708905     MAT_VIEW ACCESS FULL SF_ENV_DATA_MV (cr=1903 pr=1698 pw=0 time=6387829 us)

You can see the access pattern as above.

An interesting twist in the story came when I saw the wait events in the trace file:

  Event waited on                            Times   Max. Wait  Total Waited
  ----------------------------------------  Waited  ----------  ------------
  db file sequential read                      2253        0.74          7.73
  db file scattered read                        240        1.05          6.77
  log file switch completion                      6        0.98          3.16
  log file switch                                 8        0.98          2.47
  rdbms ipc reply                                 6        0.00          0.00
  log buffer space                                3        0.42          0.61

Again, even when we are doing a full table scan, there are “db file sequential reads”?

To confirm, I opened the raw trace file (before tkprof) and checked the obj# on the sequential read wait events: it was the mview (SF_ENV_DATA_MV), and there were many of them. To investigate further, I checked whether there were any scattered reads against the mview. There were, but there were also many sequential reads, on which Oracle waited more than on the scattered reads that did most of the data fetching.

After giving it some thought, I realized that we had created the mviews without a storage clause, which means Oracle created them with the default storage clause.

So, assuming there are 17 blocks in an mview (container table) extent and the multiblock read count is 16, Oracle will use the scattered read mechanism (multiple blocks) for the first 16 blocks and the sequential read mechanism (single block) for the remaining one, so you will find many sequential read wait events sandwiched between scattered reads. To overcome this, we re-created the mview with larger extent sizes, sized as a multiple of the MBRC (multiblock read count).

Another cause of sequential reads is chained or migrated rows: if your mview (or table) rows are migrated, the pointer to the relocated row is kept in the old (original) block, which will always be read by a single I/O call, i.e. by a sequential read. You can check the count of chained rows using DBA_TABLES.CHAIN_CNT after analyzing the table. To overcome this, we created the mview with a sensible PCTFREE, so that when the merge runs (as part of the mview refresh) and updates a few rows, the rows are not moved to a different block, hence avoiding sequential reads.
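A quick sketch of that check (the table name is a placeholder; note that it is ANALYZE, not DBMS_STATS, that populates CHAIN_CNT):

SQL> analyze table my_agg_mv compute statistics;
SQL> select table_name, chain_cnt
  2  from dba_tables
  3  where table_name = 'MY_AGG_MV';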

Conclusion:

1. Creating an mview with “USING NO INDEX” skips creation of the “I_SNAP$” index, which sometimes helps fast refresh when data changes are quite frequent and you cannot afford to collect stats every few minutes.

2. Create the mview with a storage clause suited to your environment. Default extent sizes may not always be good.

3. PCTFREE can be quite handy for avoiding sequential reads and extra block reads.

Posted by Sachin at 2:01 AM

Labels: CBO, Tips, Tuning

Saturday, March 3, 2007
Optimizer_mode – ALL_ROWS or FIRST_ROWS?

Of all the Oracle RDBMS modules, the optimizer code is actually the most complicated, and the different optimizer modes are like the jack you reach for when your car has a puncture.

This paper focuses on how the optimizer behaves differently when you have the optimizer mode set to ALL_ROWS or FIRST_ROWS.

Possible values for optimizer_mode: choose / all_rows / first_rows / first_rows[n]

By default, the value of optimizer_mode is CHOOSE, which basically means ALL_ROWS (if statistics on the underlying tables exist) or RULE (if there are no statistics on the underlying tables). So it is very important to have statistics collected on your tables at regular intervals, or else you are living in the Stone Age.

FIRST_ROWS and ALL_ROWS are both cost-based optimizer features. You may use either according to your requirement; a quick way to inspect and change the mode is sketched below.
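For example (session level; first_rows_10 is just an illustrative value):

SQL> show parameter optimizer_mode
SQL> alter session set optimizer_mode = first_rows_10;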

FIRST_ROWS/ FIRST_ROWS[n]

In simple terms, it ensures the best response time for the first few rows (n rows).

This mode is good for an interactive client-server environment where the server serves the first few rows and, by the time the user scrolls down for more rows, it fetches the others. So the user feels he has been served the data he requested, but in reality the request is still pending and the query is still fetching data in the background.

The best example of this is TOAD: if you click on the data tab, it instantaneously starts showing you data and you feel TOAD is faster than SQL*Plus, but the fact is that if you scroll down, you will see the query is still running.

Ok, let us simulate this on SQLPLUS

Create a table and index over it:

SQL> create table test as select * from all_objects;

Table created.

SQL> create index test_in on test(object_type);

Index created.

SQL> exec dbms_stats.gather_table_stats('SAC','TEST')

PL/SQL procedure successfully completed.

SQL> select count(*) from test;

COUNT(*)
----------
     37944

SQL> select count(*) from test where object_type='JAVA CLASS';

COUNT(*)
----------
     14927

You see, out of almost 38k records, 15k are of type JAVA CLASS. Now, if you select the rows having object_type='JAVA CLASS', the query should not use the index, as almost half of the rows are JAVA CLASS; it would be foolish of the optimizer to read the index first and then go to the table.

Check out the Explain plans

SQL> set autotrace traceonly exp
SQL> select * from test where object_type='JAVA CLASS';

Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id | Operation         | Name | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      | 1001 | 94094 |    10   (0)| 00:00:01 |
|* 1 |  TABLE ACCESS FULL| TEST | 1001 | 94094 |    10   (0)| 00:00:01 |
--------------------------------------------------------------------------

As you see above, the optimizer has not used the index we created on this table.

Now use FIRST_ROWS hint:

SQL> select /*+ FIRST_ROWS*/ * from test where object_type='JAVA CLASS';

Execution Plan
----------------------------------------------------------
Plan hash value: 3548301374

---------------------------------------------------------------------------------------
| Id | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |         | 14662 | 1345K |   536   (1)| 00:00:07 |
|  1 |  TABLE ACCESS BY INDEX ROWID| TEST    | 14662 | 1345K |   536   (1)| 00:00:07 |
|* 2 |   INDEX RANGE SCAN          | TEST_IN | 14662 |       |    43   (3)| 00:00:01 |
---------------------------------------------------------------------------------------

In this case, the optimizer has used the index.

Q> Why?

Ans> Because you wanted to see the first few rows quickly. So, following your instructions, Oracle delivered the first few rows quickly using the index, delivering the rest later.


See the difference in cost: although the (partial) response time of the second query was better, its resource consumption was higher.

But that does not mean this optimizer mode is bad. As I said, this mode can be good for an interactive client-server model. In most OLTP systems, where users want to see data fast on their screens, this optimizer mode is very handy.

Important facts about FIRST_ROWS

1. It gives preference to index scans over full scans (even when an index scan is not good).

2. It prefers nested loops over hash joins, because a nested loop returns rows as they are selected (and compared), whereas a hash join first hashes one input into a hash table, which takes time.

3. The cost of the query is not the only criterion for choosing the execution plan; it chooses the plan that helps in fetching the first rows fast.

4. It may be a good option in an OLTP environment where the user wants to see data as early as possible.

ALL_ROWS

In simple terms, it means better throughput.

While FIRST_ROWS may be good at returning the first few rows, ALL_ROWS ensures optimum resource consumption and throughput for the query. In other words, ALL_ROWS is better at retrieving the last row as early as possible.

In the above example while explaining FIRST_ROWS, you have already seen how efficient ALL_ROWS is.

Important facts about ALL_ROWS

1. ALL_ROWS considers both index scans and full scans and, based on their contribution to the overall query, uses them. If the selectivity of a predicate is low (it matches few rows, for example 'where employee_code=7712'), the optimizer may use the index to fetch the data; but if the predicate matches a large fraction of the rows ('where deptno=10'), the optimizer may prefer a full table scan. With ALL_ROWS, the optimizer has more freedom to do its job at its best.

2. Good for OLAP systems, where work happens in batches/procedures. (Some reports may still use FIRST_ROWS, depending upon the anxiety level of the report reviewers.)

3. It prefers hash joins over nested loops for larger data sets.

Conclusion

The cost-based optimizer gives you the flexibility to choose response time or throughput, so use these modes based on your business requirement.

Posted by Sachin at 3:13 AM

Labels: CBO, Tuning


Friday, March 2, 2007
Nested loops, Hash join and Sort Merge joins – difference?

Nested loop (loop over loop)

In this algorithm, an outer loop is formed over a row source with few entries, and then for each of its entries an inner loop is processed.

Ex:

Select tab1.*, tab2.* from tab1, tab2 where tab1.col1=tab2.col2;

It is processed like:

For i in (select * from tab1) loop
  For j in (select * from tab2 where col2 = i.col1) loop
    Display results;
  End loop;
End loop;

The steps involved in doing a nested loop are:

a) Identify the outer (driving) table.

b) Assign the inner (driven) table to the outer table.

c) For every row of the outer table, access the rows of the inner table.

In the execution plan it is seen like this:

NESTED LOOPS
  outer_loop
  inner_loop

When does the optimizer use nested loops?

The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as this table is probed every time for a new value from the outer table.

The optimizer may not use a nested loop when:

1. The number of rows in both tables is quite high.
2. The inner query always results in the same set of records.
3. The access path of the inner table is independent of the data coming from the outer table.



Note: You will see more use of nested loops with the FIRST_ROWS optimizer mode, as it works on the model of showing instantaneous results to the user as they are fetched. There is no need to cache any data before it is returned to the user. In the case of a hash join such caching is needed, as explained below.

Hash join

Hash joins are used when joining large tables. The optimizer uses the smaller of the 2 tables to build a hash table in memory, then scans the larger table and compares the hash values (of rows from the large table) with this hash table to find the joined rows.

The algorithm of a hash join is divided into two parts:

1. Build an in-memory hash table on the smaller of the two tables.
2. Probe this hash table with the hash value for each row of the second table.

In simpler terms it works like this:

Build phase

For each row RW1 in small (left/build) table loop
  Calculate hash value on RW1 join key
  Insert RW1 in the appropriate hash bucket
End loop;

Probe phase

For each row RW2 in big (right/probe) table loop
  Calculate the hash value on RW2 join key
  For each row RW1 in the matching hash bucket loop
    If RW1 joins with RW2
      Return RW1, RW2
  End loop;
End loop;

When does the optimizer use a hash join?

The optimizer uses a hash join when joining big tables, or big fractions of small tables.

Unlike a nested loop, the output of a hash join is not instantaneous, as the join is blocked on building the hash table first.

Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of showing results once all the rows of at least one of the tables are hashed into the hash table.

Sort merge join


A sort merge join is used to join two independent data sources. It generally performs better than a nested loop when the volume of data is big, but not as well as a hash join.

It performs better than a hash join when the join-condition columns are already sorted, or when no sorting is required.

The full operation is done in two parts:

Sort join operation:

get first row RW1 from input1
get first row RW2 from input2

Merge join operation:

while not at the end of either input loop
  if RW1 joins with RW2
    get next row RW2 from input 2
    return (RW1, RW2)
  else if RW1 < RW2
    get next row RW1 from input 1
  else
    get next row RW2 from input 2
end loop

Note: If the data is already sorted, the first step is avoided.

The important point to understand is that, unlike a nested loop, where the driven (inner) table is read as many times as there are rows coming from the outer table, in a sort merge join each of the tables involved is accessed at most once. So they prove better than nested loops when the data set is large.

When does the optimizer use a sort merge join?

a) When the join condition is an inequality condition (like <, <=, >=). This is because a hash join cannot be used for inequality conditions, and if the data set is large, a nested loop is definitely not an option. A small illustration follows this section.

b) If sorting is required anyway due to some other attribute (other than the join), like an “order by”, the optimizer prefers a sort merge join over a hash join, as it is cheaper.

Note: A sort merge join can be seen with both the ALL_ROWS and FIRST_ROWS optimizer goals, because it works on a model of first sorting both data sources and then returning the results. So if the data set is large and you have FIRST_ROWS as the optimizer goal, the optimizer may prefer a sort merge join over a nested loop because of the large data. And if you have ALL_ROWS as the optimizer goal and an inequality condition is used in the SQL, the optimizer may use a sort merge join over a hash join.
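As a small illustration of point (a): an inequality join such as the one below (using the classic SCOTT demo tables) cannot use a hash join, so on a large data set you would typically see a MERGE JOIN step in its plan:

SQL> set autotrace trace exp
SQL> select e.ename, g.grade
  2  from emp e, salgrade g
  3  where e.sal between g.losal and g.hisal;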


Posted by Sachin at 6:47 AM

Labels: CBO, Tuning

Monday, January 29, 2007
Index skip scan – how?

With Oracle 9i, the CBO is equipped with many more features; one of them is the “index skip scan”. This means that even if you have a composite index on more than one column and you use a non-prefix column alone in your SQL, it may still use the index.

I said “may” because the CBO will calculate the cost of using the index, and if it is more than that of a full table scan, it may not use the index.

An index skip scan works differently from a normal index (range) scan. A normal range scan works from top to bottom first and then moves horizontally. A skip scan, in contrast, comprises several range scans: since the query lacks the leading column, the optimizer effectively rewrites it into smaller queries, each doing a range scan.

Ex:

SQL> create table test (a number, b number, c number);

Table created.

SQL> create index test_i on test(a,b);

Index created.

SQL> begin
  2    for i in 1 .. 100000
  3    loop
  4      insert into test values(mod(i, 5), i, 100);
  5    end loop;
  6    commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname => 'SAC', tabname => 'test', cascade => true);

PL/SQL procedure successfully completed.

SQL> set autotrace trace exp
SQL> select * from test where b=95267;

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=22 Card=1 Bytes=10)
1   0    TABLE ACCESS (BY INDEX ROWID) OF 'TEST' (TABLE) (Cost=22 Card=1 Bytes=10)
2   1      INDEX (SKIP SCAN) OF 'TEST_I' (INDEX) (Cost=21 Card=1)

In the above example, “select * from test where b=95267” was broken down into several smaller range scan queries. It was effectively equivalent to the following:

Select * from test where a=0 and b=95267
Union
Select * from test where a=1 and b=95267
Union
Select * from test where a=2 and b=95267
Union
Select * from test where a=3 and b=95267
Union
Select * from test where a=4 and b=95267;

Concretely, it is correct to say that a skip scan is not as efficient as a normal “single range scan”, but it saves disk space and the overhead of maintaining another index.

Posted by Sachin at 11:09 PM

Labels: CBO, Tuning

Friday, January 12, 2007
SESSION_CACHED_CURSORS Vs CURSOR_SPACE_FOR_TIME - which, when and why?

These 2 are considered to be the most important parameters for shared pool tuning, but I guess most of us generally don't use them, or sometimes use them incorrectly.

The idea of putting them here is to understand “what they do?”, “when to use them?”, “how to use them?” and finally “see the impact”.

SESSION_CACHED_CURSORS

In most environments there are many SQLs which are re-fired many times within a session, and every time one is issued, the session searches the shared pool for the parsed state; if it doesn't find the parsed version, it will “hard parse” it, and if it exists in the shared pool, it will still do a “soft parse”.

As we know, a “hard parse” is a costly operation; even a “soft parse” requires a library cache latch and CPU overhead, which, aggregated, is a significant number.

This parameter, if set to a non-zero value (default is 50), improves “soft parse” performance by doing a softer soft parse.

How?

If enabled, Oracle maintains a local session cache which stores recently closed cursors of a session.


To avoid this space getting misused or overused, Oracle caches only those cursors which have had 3 parse calls in the past, so not all the SQLs issued by a session are here. Remember, each cursor pinned here is not freeable, and hence you may require a larger shared pool area.

A normal cursor in the shared pool is the sum of 2 components:
a) Heap 0 – size 1KB
b) SQL Area – size a multiple of 4KB

When we use session_cached_cursors, only the first component of the cursor, HEAP 0, is pinned in the local session cache; when there is a re-parse call for a statement, Oracle first checks for the cursor in the local cache and, if found, gets the address of the rest of the cursor in the SQL Area (assuming it has not aged out), thereby saving CPU overhead and library cache latch contention.

How much is it getting used?

SQL> select max(value) from v$sesstat
  2  where STATISTIC# in (select STATISTIC# from v$statname where name='session cursor cache count');

MAX(VALUE)
----------
       100

This shows the maximum value for session_cached_cursors in the past. If this value equals the “session_cached_cursors” init.ora parameter, you should consider increasing it.

If you want to see how your session cache is doing:

SQL> select cache/tot*100 "Session cursor cache%"
  2  from
  3  (select value tot from v$sysstat where name='parse count (total)'),
  4  (select value cache from sys.v_$sysstat where name = 'session cursor cache hits');

Session cursor cache%
---------------------
           106.635284

A value near 100 is considered very good. But you may still consider increasing this parameter if MAX(VALUE) in the first query equals the number of cached cursors you have configured.

Conclusion: In an OLTP application, where the same set of SQL is issued a number of times, one should configure this parameter above its default value (50). Increasing it also means extra memory is required for the shared pool, so you should increase your shared pool size accordingly when you use this parameter.
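For completeness, a sketch of how the parameter can be set; the value 100 is only an example (at the system level the parameter is static, so the spfile change takes effect after a restart):

SQL> alter session set session_cached_cursors = 100;
SQL> alter system set session_cached_cursors = 100 scope=spfile;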

CURSOR_SPACE_FOR_TIME

SQL can be aged out of shared pool in 2 cases:


a) When the cursor is closed: when cursors are closed by the application, they can still be in the shared pool, until a request for a new cursor comes in and Oracle needs to apply its LRU algorithm. SESSION_CACHED_CURSORS helps you in pinning them (partially, because it pins only HEAP 0) when the cursors are closed.

b) When the cursor is open: Oracle requires the parsed state of the SQL at the PARSE and EXECUTE phases. If Oracle parses (soft or hard) a statement, there is a likelihood that Oracle may age your SQL out of the shared pool after the PARSE phase if it needs to accommodate a new SQL coming its way. So at the EXECUTE phase, there is a possibility that the parsed information is lost and Oracle has to parse the statement again.

CURSOR_SPACE_FOR_TIME, if set to TRUE, ensures that SQL is not aged out before the cursor is closed, so in the EXECUTE phase you will still have the PARSE information.

But this is generally a rare case, occurring only in very highly active environments, because to accommodate a new SQL, Oracle first checks the free space; if there isn't enough, it checks the closed cursors and sees whether any can be aged out, and only when no space can be reclaimed does Oracle come to the open cursors which have not been EXECUTED. This generally happens when the shared pool is too small.

I don't suggest setting this parameter to TRUE in most of the cases.

There are some other serious tradeoffs as well. When a cursor is pinned, it can't be aged out, and the related memory cannot be freed for any new SQL; if you set this parameter to TRUE, you are telling Oracle to keep all open cursors pinned and not freeable. Moreover, with this parameter you are pinning the whole cursor, not just HEAP 0 (1KB): you pin HEAP 0 (1KB) + SQL Area (a multiple of 4KB), which makes the shared pool's life tough because of space pressure.

Conclusion:

As I said, I don't suggest setting this parameter to TRUE in most cases. An alternative is to increase the shared pool size and/or check your code for how many cursors you are opening and closing; that is a better approach. Setting this parameter is like taking paracetamol without knowing the cause of the fever.

Posted by Sachin at 1:28 AM

Labels: Tuning

Tuesday, December 26, 2006
Index usage with LIKE operator

I have seen many developers getting confused about index usage with the LIKE operator. Some feel the index will be used; others are against this feeling.

Let’s see this with example:

SQL> create table sac as select * from all_objects;

Table created.


SQL> create index sac_indx on sac(object_type);

Index created.

SQL> set autotrace trace explain

SQL> select * from sac where object_type like 'TAB%';

Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=128)
1  0    TABLE ACCESS (BY INDEX ROWID) OF 'SAC' (TABLE) (Cost=1 Card=1 Bytes=128)
2  1      INDEX (RANGE SCAN) OF 'SAC_INDX' (INDEX) (Cost=1 Card=1)

The above example shows that using the % wildcard character at the end of the pattern probes an index search.

But if it is used at the beginning, the index will not be used. And sensibly so, because Oracle doesn't know where to start the search: the value could start with anything from 'A to Z', 'a to z', or even a digit.

See this:

SQL> select * from sac where object_type like '%ABLE';

Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=ALL_ROWS (Cost=148 Card=1004 Bytes=128512)
1  0    TABLE ACCESS (FULL) OF 'SAC' (TABLE) (Cost=148 Card=1004 Bytes=128512)

Now, how do you use an index for LIKE searches with a leading wildcard? The answer is domain indexes.

See the following example:

SQL> connect / as sysdba
Connected.

SQL> grant execute on ctx_ddl to public;

Grant succeeded.

SQL> connect sac/******
Connected.
SQL> begin
  2    ctx_ddl.create_preference('SUBSTRING_PREF',
  3                              'BASIC_WORDLIST');
  4    ctx_ddl.set_attribute('SUBSTRING_PREF',
  5                          'SUBSTRING_INDEX','TRUE');
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> drop index sac_indx;

Index dropped.

SQL> create index sac_indx on sac(object_type) indextype is ctxsys.context
  2  parameters ('wordlist SUBSTRING_PREF memory 50m');

Index created.

SQL> set autotrace trace exp
SQL> select * from sac where contains(OBJECT_TYPE,'%PACK%') > 0;

Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8 Card=19 Bytes=1786)
1  0    TABLE ACCESS (BY INDEX ROWID) OF 'SAC' (TABLE) (Cost=8 Card=19 Bytes=1786)
2  1      DOMAIN INDEX OF 'SAC_INDX' (INDEX (DOMAIN)) (Cost=4)

In this case the index is getting used.

Conclusion
=============
For leading-wildcard, proximity, soundex and fuzzy searches, use domain indexes.

Posted by Sachin at 3:59 AM

Labels: Tuning

Thursday, December 21, 2006
Top SQL's

Awareness is the first step towards resolution. I hope you will agree with me.

One of the important tasks of a DBA is to know what the high CPU-consuming processes on the database server are. In my last organization, we used to get a number of requests saying that the DB server was running slow. Now the problem is that this server hosts 86 databases, and finding the culprit process and database sounds like a daunting task (but it isn't).

See this:

First find out the top CPU processes on your system:

You may use TOP (or ps aux) or any other utility to find the top cpu consuming process.


Here is a sample top output:

bash-3.00$ top
  PID USERNAME LWP PRI NICE  SIZE   RES STATE  TIME    CPU COMMAND
17480 oracle    11  59    0 1576M 1486M sleep  0:09 23.51% oracle
 9172 oracle   258  59    2 1576M 1476M sleep  0:43  1.33% oracle
 9176 oracle    14  59    2 1582M 1472M sleep  0:52  0.43% oracle
17550 oracle     1  59    0 3188K 1580K cpu/1  0:00  0.04% top
 9178 oracle    13  59    2 1571M 1472M sleep  2:30  0.03% oracle

As you can see in the first row, process 17480 is consuming 23.51% CPU.

Now this process can belong to any of the many instances on this server. To find out which instance this process belongs to:

bash-3.00$ ps -ef | grep 17480

oracle 17480 17479 0 03:02:03 ? 0:48 oracleorcl (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

The instance name is embedded in the process name: oracleorcl, i.e. SID orcl.

Now you know which instance is holding that session.

Change your environment settings (ORACLE_SID, ORACLE_HOME) to point to this database and connect to it as SYSDBA.

bash-3.00$ sqlplus "/ as sysdba"

SQL*Plus: Release 10.2.0.2.0 - Production on Thu Dec 21 04:03:44 2006

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Release 10.2.0.2.0 - Production

SQL> select ses.sid SID, sqa.SQL_TEXT SQL from
  2  v$session ses, v$sqlarea sqa, v$process proc
  3  where ses.paddr=proc.addr
  4  and ses.sql_hash_value=sqa.hash_value
  5  and proc.spid=17480;

SID SQL
--------- -----------------
 67 delete from test

Now you have the SQL responsible for the 23% CPU-consuming process. In my case it was a deliberate DELETE statement used to set up this test, but in your case it could be a query worth tuning.


Mostly, knowing what the problem is is half the solution (at least you know what the issue is). Whether the issue should be addressed right away or taken to the development team is a subjective matter on which I don't want to comment.

Posted by Sachin at 3:00 AM

Labels: Tuning

Tuesday, December 19, 2006
CURSOR_SHARING - Do we use it?

Are our cursors in the shared pool shared at all? How many DBAs use this feature of 9i (introduced in 8i but enhanced in 9i)?

Actually, a lot of us don't use it at all. Let's first understand this feature and then implement it in our systems.

CURSOR_SHARING is an init.ora parameter which decides whether a SQL statement sent by a user is a candidate for fresh parsing or will use an existing plan.

This parameter has 3 values.

1. CURSOR_SHARING = Exact (Default)

Definition: share the plan only if the text of the SQL matches exactly the text of a SQL statement lying in the shared pool.

Let’s take an example

SQL> create table test1 (t1 number);

Table created.

SQL> insert into test1 values(1);

1 row created.

SQL> insert into test1 values(2);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test1 where t1=1;

T1
----------
         1

SQL> select * from test1 where t1=2;


T1
----------
         2

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test1%'
  4  order by sql_text;

SQL_TEXT
-----------------------------------------------------
select * from test1 where t1=1
select * from test1 where t1=2

As you see, there were 2 statements in V$SQL, so Oracle generated 2 plans. It had to do the same work again to generate the plan even though the only difference between the two SQLs was the literal value.

2. CURSOR_SHARING = Force (Introduced in 8.1.6)

Definition: share the plan (forcibly) of a SQL statement if its text matches (except for the literal values) the text of a SQL statement in the shared pool.

This means that if 2 SQLs are the same except for their literal values, they share the plan.

Let’s take an example:

I’m using the same table and data which is used in case of above example.

SQL> alter system flush shared_pool;

System altered.

SQL> alter session set cursor_sharing=force;

Session altered.

SQL> select * from test1 where t1=1;

T1
----------
         1

SQL> select * from test1 where t1=2;

T1
----------
         2

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test1%'
  4  order by sql_text;

SQL_TEXT
---------------------------------------------------
select * from test1 where t1=:"SYS_B_0"

You can see that for both statements there was only one entry in V$SQL. This means that for the second occurrence, Oracle did not generate a new plan.

This not only saves the DB server engine the time needed to generate the plan, but also reduces the number of plans the shared pool has to hold.

Important note:

Cursor_sharing = force can have some flip-side behavior as well, so you must be careful using it. With it, we are forcing Oracle to use the same plan for 2 (or more) SQLs even when using the same plan may not be good for all of them.

Example: “where t1=2” may be a good candidate for index scan while “where t1=10” should use a full table scan because 90% of the rows in the table has t1=10 (assumption).

3. CURSOR_SHARING = SIMILAR (Introduced in 9i)

This is the tricky one, but most used.

Definition: SIMILAR causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized. (Source: Oracle documentation)

Let's understand this. Re-quoting the example above: “where t1=2” may be a good candidate for an index scan while “where t1=10” should use a full table scan because 90% of the rows in the table have t1=10 (assumption).

To avoid 2 statements using the same plan when the same plan is not good for one of them, we have cursor_sharing=similar

Let’s take an example:

SQL> alter system flush shared_pool;

System altered.

SQL> drop table test1;


Table dropped.

SQL> create table test1 (t1 number,t2 number);

Table created.

SQL> begin
  2    for i in 1 .. 100 loop
  3      insert into test1 values(1,i);
  4    end loop;
  5    commit;
  6    update test1 set t1=2 where rownum < 2;
  7  end;
  8  /

PL/SQL procedure successfully completed.

In this case t1 has the value "2" in the first row and "1" in the remaining 99 rows.

SQL> create index tt_indx on test1(t1);

Index created.

SQL> alter session set cursor_sharing=similar;

Session altered.

SQL> select * from test1 where t1=2;

1 row selected.

SQL> select * from test1 where t1=1;

99 rows selected.

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test1%'
  4  order by sql_text;

SQL_TEXT
----------------------------------------------------
select * from test1 where t1=:"SYS_B_0"
select * from test1 where t1=:"SYS_B_0"

This tells us that even though the two statements were similar, Oracle opted for a different plan (two cursor entries). Now even if you put t1=30 (0 rows), Oracle will create another plan.

SQL> select * from test1 where t1=30; -- (0 rows)

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test1%'
  4  order by sql_text;

SQL_TEXT
---------------------------------------------------
select * from test1 where t1=:"SYS_B_0"
select * from test1 where t1=:"SYS_B_0"
select * from test1 where t1=:"SYS_B_0"

This is because the first time the SQL ran, the Oracle engine found the literal value "unsafe", since using the same literal value could cause bad plans for other similar SQLs. So along with the plan, the optimizer stored the literal value as well. This ensures the plan is reused only when the same literal is provided. On any change, the optimizer generates a new plan.

But this doesn’t mean that SIMILAR and EXACT are same.

See this:

SQL> alter system flush shared_pool;

System altered.

SQL> select * from test1 where t1=2 and t1=22;

no rows selected

SQL> select * from test1 where t1=2 and t1=23;

no rows selected

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test1%'
  4  order by sql_text;

SQL_TEXT
--------------------------------------------------------------
select * from test1 where t1=:"SYS_B_0" and t1=:"SYS_B_1"

The optimizer used a single plan for both.

Conclusions:

1. Use CURSOR_SHARING=similar only when you have library cache misses and/or most of the SQL statements differ only in literal values

2. CURSOR_SHARING=force/similar significantly reduces the number of plans in shared pool

Note:


1. Oracle does not recommend setting CURSOR_SHARING to FORCE in a DSS environment or if you are using complex queries. Also, star transformation is not supported with CURSOR_SHARING set to either SIMILAR or FORCE

2. Setting CURSOR_SHARING to SIMILAR or FORCE causes an increase in the maximum lengths (as returned by DESCRIBE) of any selected expressions that contain literals (in a SELECT statement). However, the actual length of the data returned does not change.
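To judge whether your system would actually benefit from FORCE or SIMILAR, you can look for statements that differ only in their literals. A minimal sketch (my addition, assuming Oracle 10g or later, where V$SQL exposes FORCE_MATCHING_SIGNATURE):

-- Statements sharing a FORCE_MATCHING_SIGNATURE differ only in literals;
-- a high count suggests CURSOR_SHARING=FORCE/SIMILAR (or bind variables) would help.
select force_matching_signature, count(*) as versions
from   v$sql
where  force_matching_signature <> 0
group by force_matching_signature
having count(*) > 10
order by versions desc;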


Thursday, November 30, 2006

Precaution while defining data types in Oracle

I was reading a wonderful article on Tom Kyte's blog on the repercussions of ill-defined data types.

Some of the example he mentions is:

- Varchar2(40) Vs Varchar2(4000)
- Date Vs Varchar2

Varchar2(40) Vs Varchar2(4000)

Generally developers ask for this to avoid issues in the application; they always want the uppermost limit to avoid any application errors. The argument is that the varchar2 data type does not reserve 4000 characters of space, so disk space is not an issue. But what they don't realize is how costly these definitions are.

Repercussions

- Generally an application does an "array fetch" from the database, i.e. it selects 100 (maybe more) rows in one go. So if you are selecting 10 varchar2 columns, the effective RAM (not storage) usage will be 4000 (chars) x 10 (cols) x 100 (rows) = 4 MB of RAM. On the contrary, had the columns been defined with a 40-char size, the usage would have been 40 x 10 x 100 ~ 40 KB (approx)!! See the difference; and we didn't even multiply by the number of sessions. That could be another shock!!

- Later on, it is difficult to know what the column was made for. E.g., for first_name, if you define varchar2(4000), it is confusing for a new developer to know what this column was meant to hold.
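If you suspect over-sized columns in an existing schema, you can compare the declared length with the data actually stored. A rough sketch (my addition; employees/first_name are hypothetical names):

-- Declared lengths of VARCHAR2 columns, to spot suspiciously large definitions
select table_name, column_name, data_length
from   user_tab_columns
where  data_type = 'VARCHAR2'
and    data_length >= 1000;

-- For a suspect column, check the longest value actually stored
select max(length(first_name)) from employees;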

Date Vs varchar2

Again, lots of developers define date columns as varchar2 (or char) for their convenience. But what they forget is not only data integrity (a date could be 01/01/03; which part was dd, mm, yy? You no longer know what you stored) but also performance.

While doing "index range scans" using a predicate like "char_date_col between :string1 and :string2" on a varchar2 column, the optimizer will not be able to use the index as efficiently as in the case of:

"date_col between :date1 and :date2" on a true DATE column.
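As a quick illustration (my addition; the table and column names are made up), a range predicate on a true DATE column keeps both the semantics and the index usable:

-- Dates stored as strings compare character-by-character, so '02/01/03' > '10/05/03';
-- stored as DATE, the same range predicate is unambiguous and index-friendly:
select count(*)
from   orders
where  order_dt between to_date('01/01/2003','DD/MM/YYYY')
                    and to_date('31/12/2003','DD/MM/YYYY');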

I suggest you read the full article by the maestro himself.


Tuesday, November 28, 2006

Oracle table compression

Compress your tables

This feature was introduced in 9i Release 2 and is most useful in a warehouse environment (for fact tables).

How to compress? Simple

SQL> alter table test move compress;

Table altered.

How does Oracle implement compression?

Oracle compresses data by eliminating duplicate values within a data block. Any repetitive occurrence of a value in a block is replaced by a symbol entry in a "symbol table" within the data block. So, for example, if deptno=10 is repeated 5 times within a data block, it will be stored only once and the other 4 occurrences become symbol-table references. It is very important to know that every data block is self-contained and sufficient to rebuild the uncompressed form of the data.

Table compression can significantly reduce disk and buffer cache requirements for database tables while improving query performance. Compressed tables use fewer data blocks on disk, reducing disk space requirements.

Identifying tables to compress:

First create the following function which will get you the extent of compression

create function compression_ratio (tabname varchar2)
return number is
  -- sample percentage
  pct     number := 0.000099;
  blkcnt  number := 0;
  blkcntc number;
begin
  execute immediate 'create table TEMP$$FOR_TEST pctfree 0 as select * from ' || tabname || ' where rownum < 1';
  while ((pct < 100) and (blkcnt < 1000)) loop
    execute immediate 'truncate table TEMP$$FOR_TEST';
    execute immediate 'insert into TEMP$$FOR_TEST select * from ' || tabname || ' sample block (' || pct || ',10)';
    execute immediate 'select count(distinct(dbms_rowid.rowid_block_number(rowid))) from TEMP$$FOR_TEST' into blkcnt;
    pct := pct * 10;
  end loop;
  execute immediate 'alter table TEMP$$FOR_TEST move compress';
  execute immediate 'select count(distinct(dbms_rowid.rowid_block_number(rowid))) from TEMP$$FOR_TEST' into blkcntc;
  execute immediate 'drop table TEMP$$FOR_TEST';
  return (blkcnt/blkcntc);
end;
/

SQL> declare
  2    a number;
  3  begin
  4    a := compression_ratio('TEST');
  5    dbms_output.put_line(a);
  6  end;
  7  /

2.91389728096676737160120845921450151057

PL/SQL procedure successfully completed.

SQL> select bytes/1024/1024 "Size in MB" from user_segments
  2  where segment_name='TEST';

Size in MB
----------
        18

SQL> alter table test move compress;

Table altered.

SQL> select bytes/1024/1024 "Size in MB" from user_segments
  2  where segment_name='TEST';

Size in MB
----------
         6

After compressing the table, you need to rebuild the indexes because the rowids have changed.
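For example (my addition; the index name is hypothetical), you can list the affected indexes and rebuild them:

-- Indexes on the moved table are left UNUSABLE by the MOVE
select index_name, status from user_indexes where table_name = 'TEST';

alter index test_idx rebuild;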

Notes:

- This feature is best utilized in a warehouse environment where there are lots of duplicate values (fact tables). In fact, a larger block size is more efficient, because duplicate values will be stored only once within a block.

- This feature has no negative effect; in fact it accelerates the performance of queries accessing large amounts of data.

- I suggest you read the following white paper by Oracle, which explains the whole algorithm in detail along with industry-recognized TPC test cases.

http://www.vldb.org/conf/2003/papers/S28P01.pdf

I wrote the above article after reading Oracle Magazine. I suggest you read the full article on the Oracle site.


The Clustering Factor

The clustering factor is a number which represents the degree to which data is randomly distributed in a table.

In simple terms it is the number of “block switches” while reading a table using an index.

Figure: Bad clustering factor

The above diagram shows how scattered the rows of the table are. The first index entry (from the left of the index) points to the first data block and the second index entry points to the second data block. So while doing an index range scan or a full index scan, the optimizer has to switch between blocks and may revisit the same block more than once because the rows are scattered. The number of times the optimizer makes these block switches is what is termed the "clustering factor".


Figure: Good clustering factor

The above image represents a good CF. During an index range scan, the optimizer does not have to jump to the next data block for every entry, as most of the index entries point to the same data block. This helps significantly in reducing the cost of your SELECT statements.

Clustering factor is stored in data dictionary and can be viewed from dba_indexes (or user_indexes)

SQL> create table sac as select * from all_objects;

Table created.

SQL> create index obj_id_indx on sac(object_id);

Index created.

SQL> select clustering_factor from user_indexes where index_name='OBJ_ID_INDX';

CLUSTERING_FACTOR
-----------------
              545

SQL> select count(*) from sac;

  COUNT(*)
----------
     38956

SQL> select blocks from user_segments where segment_name='OBJ_ID_INDX';

    BLOCKS
----------
        96

The above example shows that the index would have to switch data blocks 545 times to return all the table's data via the index.

Note:
- A good CF is equal (or near) to the number of blocks in the table.
- A bad CF is equal (or near) to the number of rows in the table.

Myth: rebuilding the index can improve the CF. It cannot.


Then how to improve the CF?

- To improve the CF, it's the table that must be rebuilt (and reordered).
- If the table has multiple indexes, careful consideration needs to be given to which index to order the table by.
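A minimal sketch of such a rebuild, ordered by the index column (my addition; take an outage window, and recreate grants, constraints and indexes as needed):

create table sac_ordered as
  select * from sac order by object_id;   -- physically cluster rows by the index key

-- after verifying the copy: drop/rename, then recreate the index
-- drop table sac;
-- alter table sac_ordered rename to sac;
-- create index obj_id_indx on sac(object_id);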

Important point: the above is my interpretation of the subject after reading Jonathan Lewis's book on the cost-based optimizer.


Thursday, November 23, 2006

Bad SQL design

Important point

If the statement is designed poorly, nothing much can be done by the optimizer or the indexes.

A few known rules of thumb:

–Avoid Cartesian joins

–Use UNION ALL instead of UNION – if possible

–Use EXISTS clause instead of IN - (wherever appropriate)

–Use order by only when you really require it - it's very costly

–When joining 2 views that themselves select from other views, check that the 2 views that you are using do not join the same tables!

–Avoid NOT IN or NOT = on indexed columns. They prevent the optimizer from using indexes. Use where amount > 0 instead of where amount != 0

- Avoid writing WHERE column IS NOT NULL. Nulls can prevent the optimizer from using an index

- Avoid calculations on indexed columns. Write WHERE approved_amt > 26000*3 instead of WHERE approved_amt/3 > 26000

- The query below will return any record where bmm_code = cORE, Core, CORE, COre, etc.

select appl_appl_id from nci_appl_elements_t where upper(bmm_code) LIKE 'CORE%'

But this query can be very inefficient as it results in a full table scan. It cannot make use of the index on bmm_code.


Instead, write it like this:

select appl_appl_id from nci_appl_elements_t where (bmm_code like 'C%' or bmm_code like 'c%') and upper(bmm_code) LIKE 'CORE%'

This results in Index Range Scan.

You can also make this more efficient by using 2 characters instead of just one:

where ((bmm_code like 'CO%' or bmm_code like 'Co%' or bmm_code like 'cO%' or bmm_code like 'co%') and upper(bmm_code) LIKE 'CORE%')
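Where your Oracle version supports function-based indexes, another option (my addition, not from the original post) is to index the expression itself:

-- A function-based index lets UPPER(bmm_code) drive an index range scan directly
create index bmm_code_upper_idx on nci_appl_elements_t (upper(bmm_code));

select appl_appl_id
from   nci_appl_elements_t
where  upper(bmm_code) like 'CORE%';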

Inviting Experts

Friends, feel free to correct me. I would appreciate it if you add your comments as well.


Consider declaring NOT NULL columns

People sometimes do not bother to define columns as NOT NULL in the data dictionary, even though these columns should not contain nulls, and indeed never do contain nulls because the application ensures that a value is always supplied. You may think that this is a matter of indifference, but it is not. The optimizer sometimes needs to know that a column is not nullable, and without that knowledge it is constrained to choose a less than optimal execution plan.

1. An index on a nullable column cannot be used to drive access to a table unless the query contains one or more predicates against that column that exclude null values.

Of course, it is not normally desirable to use an index based access path unless the query contains such predicates, but there are important exceptions.

For example, if a full table scan would otherwise be required against the table and the query can be satisfied by a fast full scan (scan for which table data need not be read) against the index, then the latter plan will normally prove more efficient.

Test-case for the above reasoning

SQL> create index color_indx on automobile(color);

Index created.

SQL> select distinct color,count(*) from automobile group by color;

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1046 Bytes=54392)
1  0     SORT (GROUP BY) (Cost=4 Card=1046 Bytes=54392)
2  1       TABLE ACCESS (FULL) OF 'AUTOMOBILE' (TABLE) (Cost=3 Card=1046 Bytes=54392)

SQL> alter table automobile modify color not null;

Table altered.

SQL> select distinct color,count(*) from automobile group by color;

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1046 Bytes=54392)
1  0     SORT (GROUP BY) (Cost=4 Card=1046 Bytes=54392)
2  1       INDEX (FAST FULL SCAN) OF 'COLOR_INDX' (INDEX) (Cost=3 Card=1046 Bytes=54392)

2. If you are calling a sub-query in a parent query using the NOT IN predicate, the index on the column (in the where clause of the parent query) will not be used.

This is because, as per the optimizer, rows of the parent query are to be returned only when there is no matching row from the sub-query; and if the sub-query can potentially contain a NULL value (UNKNOWN, incomparable), the parent query has nothing to compare with a NULL value, so it will not use the index.

Test-case for the above Reasoning

SQL> create index sal_indx on emp(sal);

Index created.

SQL> create index ename_indx on emp(ename);

Index created.

SQL> select * from emp where sal not in (select sal from emp where ename='JONES');

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=17 Card=13 Bytes=481)
1  0     FILTER
2  1       TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=518)  --> you can see a full table scan even when an index exists on SAL
3  1       TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Card=1 Bytes=10)
4  3         INDEX (RANGE SCAN) OF 'ENAME_INDX' (INDEX) (Cost=1 Card=1)

SQL> alter table emp modify sal not null;

Table altered.

SQL> select * from emp where sal not in (select sal from emp where ename='JONES');

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5 Card=12 Bytes=564)
1  0     MERGE JOIN (ANTI) (Cost=5 Card=12 Bytes=564)
2  1       TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Card=14 Bytes=518)
3  2         INDEX (FULL SCAN) OF 'SAL_INDX' (INDEX) (Cost=1 Card=14)  -> here you go, your index getting used now
4  1       SORT (UNIQUE) (Cost=3 Card=1 Bytes=10)
5  4         TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=2 Card=1 Bytes=10)
6  5           INDEX (RANGE SCAN) OF 'ENAME_INDX' (INDEX) (Cost=1 Card=1)

The above article was inspired by an article on Ixora. That article was missing some of the test cases, so I thought of adding a few for newbies to relate to.


IN Vs EXISTS

The two are processed quite differently.

IN Clause

Select * from T1 where x in ( select y from T2 )


is typically processed as:

select *
from t1, ( select distinct y from t2 ) t2
where t1.x = t2.y;

The sub-query is evaluated, distinct-ed, indexed and then joined to the original table -- typically.

As opposed to the "EXISTS" clause:

select * from t1 where exists ( select null from t2 where y = x )

That is processed more like:

for x in ( select * from t1 )
loop
   if ( exists ( select null from t2 where y = x.x ) )
   then
      OUTPUT THE RECORD
   end if
end loop

It always results in a full scan of T1, whereas the first query can make use of an index on T1(x).

So, when is EXISTS appropriate and when is IN appropriate?

Let's say the result of the subquery ( select y from T2 ) is "huge" and takes a long time, but table T1 is relatively small and executing ( select null from t2 where y = x.x ) is fast (nice index on t2(y)). Then EXISTS will be faster, as the time to full scan T1 and do the index probes into T2 could be less than the time to simply full scan T2 to build the subquery we need to distinct on.

Let's say the result of the subquery is small -- then IN is typically more appropriate.

If both the subquery and the outer table are huge -- either might work as well as the other -- it depends on the indexes and other factors.
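A concrete pair of equivalent queries against the classic SCOTT schema (my addition) makes the choice easier to see:

-- IN: good when the subquery result (departments in DALLAS) is small
select * from emp e
where  e.deptno in (select d.deptno from dept d where d.loc = 'DALLAS');

-- EXISTS: good when the outer table is small and dept.deptno is indexed
select * from emp e
where  exists (select null from dept d
               where  d.deptno = e.deptno and d.loc = 'DALLAS');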


Nested loops, Hash join and Sort Merge join - what's the difference?

Nested loop (loop over loop)

In this algorithm, an outer loop is formed which consists of a few entries, and then for each entry an inner loop is processed.

Ex:

Select tab1.*, tab2.* from tab1, tab2 where tab1.col1 = tab2.col2;

It is processed like:

For i in (select * from tab1) loop
   For j in (select * from tab2 where col2 = i.col1) loop
      Display results;
   End loop;
End loop;

The steps involved in doing a nested loop are:
a) Identify the outer (driving) table.
b) Assign the inner (driven) table to the outer table.
c) For every row of the outer table, access the rows of the inner table.

In execution plan it is seen like this:

NESTED LOOPS
   outer_loop
   inner_loop

When does the optimizer use nested loops?

The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as this table is probed every time for a new value from the outer table.

The optimizer may not use a nested loop in case:
1. The number of rows in both tables is quite high.
2. The inner query always results in the same set of records.
3. The access path of the inner table is independent of the data coming from the outer table.


Note: You will see more use of nested loops with the FIRST_ROWS optimizer mode, as it works on the model of showing results to the user instantly as they are fetched. There is no need to cache any data before it is returned to the user. In the case of a hash join such caching is needed, as explained below.

Hash join

Hash joins are used when joining large tables. The optimizer uses the smaller of the 2 tables to build a hash table in memory, then scans the larger table and compares the hash value (of rows from the large table) against this hash table to find the joined rows.

The algorithm of a hash join is divided into two parts:
1. Build an in-memory hash table on the smaller of the two tables.
2. Probe this hash table with the hash value for each row of the second table.

In simpler terms it works like

Build phase

For each row RW1 in small (left/build) table loop
   Calculate hash value on RW1 join key
   Insert RW1 in appropriate hash bucket.
End loop;

Probe Phase

For each row RW2 in big (right/probe) table loop
   Calculate the hash value on RW2 join key
   For each row RW1 in hash table loop
      If RW1 joins with RW2
         Return RW1, RW2
   End loop;
End loop;

When does the optimizer use a hash join?

The optimizer uses a hash join when joining big tables, or a big fraction of a small table.

Unlike a nested loop, the output of a hash join is not instantaneous, as the join is blocked on building the hash table first.

Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of showing results after all the rows of at least one of the tables are hashed into the hash table.

Sort merge join


A sort merge join is used to join two independent data sources. It performs better than a nested loop when the volume of data is big, but in general not as well as a hash join.

They perform better than hash join when the join condition columns are already sorted or there is no sorting required.

The full operation is done in two parts:

Sort join operation:

get first row RW1 from input1
get first row RW2 from input2

Merge join operation

while not at the end of either input loop
   if RW1 joins with RW2
      get next row RW2 from input 2
      return (RW1, RW2)
   else if RW1 < RW2
      get next row RW1 from input 1
   else
      get next row RW2 from input 2
end loop

Note: If the data is already sorted, first step is avoided.

An important point to understand is that, unlike a nested loop, where the driven (inner) table is read as many times as there are rows coming from the outer table, in a sort merge join each of the tables involved is accessed at most once. So they prove better than nested loops when the data set is large.

When does the optimizer use a sort merge join?

a) When the join condition is an inequality condition (like <, <=, >=). This is because a hash join cannot be used for inequality conditions, and if the data set is large, a nested loop is definitely not an option.

b) If sorting is anyway required due to some other attribute (other than the join) like "order by", the optimizer prefers a sort merge join over a hash join as it is cheaper.

Note: A sort merge join can be seen with both the ALL_ROWS and FIRST_ROWS optimizer goals, because it works on a model of first sorting both data sources and then starting to return results. So if the data set is large and you have FIRST_ROWS as the optimizer goal, the optimizer may prefer a sort merge join over a nested loop because of the large data volume. And if you have ALL_ROWS as the optimizer goal and an inequality condition is used in the SQL, the optimizer may use a sort merge join over a hash join.
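To experiment with the three join methods on your own data, you can steer the optimizer with hints. A sketch (my addition), again using the EMP/DEPT tables:

select /*+ use_nl(e d) */    e.ename, d.dname from emp e, dept d where e.deptno = d.deptno;  -- nested loop
select /*+ use_hash(e d) */  e.ename, d.dname from emp e, dept d where e.deptno = d.deptno;  -- hash join
select /*+ use_merge(e d) */ e.ename, d.dname from emp e, dept d where e.deptno = d.deptno;  -- sort merge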



I wanted to put some examples in the post itself, but missed it earlier.

Here it is:

SQL> conn scott/*****
Connected.
SQL> create table e as select * from emp;

Table created.

SQL> create table d as select * from dept;

Table created.

SQL> create index e_deptno on e(deptno);

Index created.

Gather stats on D as-is:

SQL> exec dbms_stats.gather_table_stats('SCOTT','D')

PL/SQL procedure successfully completed.

Set artificial stats for E:

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'E', numrows => 100, numblks => 100, avgrlen => 124);

PL/SQL procedure successfully completed.

Set artificial stats for E_DEPTNO index

SQL> exec dbms_stats.set_index_stats(ownname => 'SCOTT', indname => 'E_DEPTNO', numrows => 100, numlblks => 10);

PL/SQL procedure successfully completed.

Check out the plan:

A) With a small number of rows (100 in E), you will see the nested loop getting used.

SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno;


Execution Plan
----------------------------------------------------------
Plan hash value: 3204653704

----------------------------------------------------------------------------------------
| Id | Operation                    | Name     | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |          |  100 |  2200 |     6   (0)| 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID | E        |   25 |   225 |     1   (0)| 00:00:01 |
|  2 |   NESTED LOOPS               |          |  100 |  2200 |     6   (0)| 00:00:01 |
|  3 |    TABLE ACCESS FULL         | D        |    4 |    52 |     3   (0)| 00:00:01 |
|* 4 |    INDEX RANGE SCAN          | E_DEPTNO |   33 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

B) Let us set some more artificial stats to see which plan gets used:

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'E', numrows => 1000000, numblks => 10000, avgrlen => 124);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_index_stats(ownname => 'SCOTT', indname => 'E_DEPTNO', numrows => 1000000, numlblks => 1000);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'D', numrows => 1000000,numblks => 10000 , avgrlen => 124);

PL/SQL procedure successfully completed.

Now we have 1,000,000 rows in both the E and D tables, and the index on E(DEPTNO) reflects the same.

The plan changes!!

SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno;

Execution Plan
----------------------------------------------------------
Plan hash value: 51064926

-----------------------------------------------------------------------------------
| Id | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|  0 | SELECT STATEMENT    |      |  250G | 5122G |       | 3968K(100) | 13:13:45 |
|* 1 |  HASH JOIN          |      |  250G | 5122G |   20M | 3968K(100) | 13:13:45 |
|  2 |   TABLE ACCESS FULL | E    | 1000K | 8789K |       |  2246  (3) | 00:00:27 |
|  3 |   TABLE ACCESS FULL | D    | 1000K |   12M |       |  2227  (2) | 00:00:27 |
-----------------------------------------------------------------------------------


C) Now to test the MERGE JOIN, we set a moderate number of rows and do some ordering business.

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'E', numrows => 10000, numblks => 1000, avgrlen => 124);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_index_stats(ownname => 'SCOTT', indname => 'E_DEPTNO', numrows => 10000, numlblks => 100);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'D', numrows => 1000, numblks => 100, avgrlen => 124);

PL/SQL procedure successfully completed.

SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno order by e.deptno;

Execution Plan
----------------------------------------------------------
Plan hash value: 915894881

-----------------------------------------------------------------------------------------
| Id | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |          | 2500K |   52M |   167 (26) | 00:00:02 |
|  1 |  MERGE JOIN                  |          | 2500K |   52M |   167 (26) | 00:00:02 |
|  2 |   TABLE ACCESS BY INDEX ROWID| E        | 10000 | 90000 |   102  (1) | 00:00:02 |
|  3 |    INDEX FULL SCAN           | E_DEPTNO | 10000 |       |   100  (0) | 00:00:02 |
|* 4 |   SORT JOIN                  |          |  1000 | 13000 |    25  (4) | 00:00:01 |
|  5 |    TABLE ACCESS FULL         | D        |  1000 | 13000 |    24  (0) | 00:00:01 |
-----------------------------------------------------------------------------------------

DATABASE USAGE REPORT

set pages 200

col Mb format 9,999,999

break on report on owner skip 1
compute sum of Mb on report
compute sum of Mb on owner

select rownum as rank, a.*
from (
   select owner, tablespace_name, sum(bytes)/1024/1024 Mb
   from dba_segments
   where owner not in ('SYS','SYSTEM')
   group by owner, tablespace_name
   order by 3 desc ) a
where rownum < 11
/

clear breaks
clear computes

break on report on tablespace_name skip 1

compute sum of Mb on report
compute sum of Mb on tablespace_name

select rownum as rank, a.*
from (
   select tablespace_name, owner, sum(bytes)/1024/1024 Mb
   from dba_segments
   where owner not in ('SYS','SYSTEM')
   group by tablespace_name, owner
   order by 3 ) a
where rownum < 11
/

Script for DBA High Water Mark Statistics

SELECT hwm1.name,
       hwm1.highwater,
       hwm1.last_value
FROM   dba_high_water_mark_statistics hwm1
WHERE  hwm1.version = (SELECT MAX(hwm2.version)
                       FROM   dba_high_water_mark_statistics hwm2
                       WHERE  hwm2.name = hwm1.name)
ORDER BY hwm1.name;

 Dynamic Memory:

SELECT name,
       value
FROM   v$parameter
WHERE  SUBSTR(name, 1, 1) = '_'
ORDER BY name;

Check the results.

Job Programs?

SELECT owner,
       program_name,
       program_type,
       program_action,
       number_of_arguments,
       enabled,
       comments
FROM   dba_scheduler_programs
ORDER BY owner, program_name;


Script to see the current wait for all active sessions:

SELECT NVL(a.username, '(oracle)') AS username,
       a.osuser,
       a.sid,
       a.serial#,
       d.spid AS process_id,
       a.wait_class,
       a.seconds_in_wait,
       a.state,
       a.blocking_session,
       a.blocking_session_status,
       a.module,
       TO_CHAR(a.logon_time,'DD-MON-YYYY HH24:MI:SS') AS logon_time
FROM   v$session a, v$process d
WHERE  a.paddr = d.addr
AND    a.status = 'ACTIVE'
ORDER BY 1, 2;

View the result.

How to display scheduler information about windows:

SELECT window_name, resource_plan, enabled, active, comments
FROM   dba_scheduler_windows
ORDER BY window_name;

Check the results.

How to display the status of buffers in the SGA:

SELECT t.name AS tablespace_name,
       o.object_name,
       SUM(DECODE(bh.status, 'free', 1, 0)) AS free,
       SUM(DECODE(bh.status, 'xcur', 1, 0)) AS xcur,
       SUM(DECODE(bh.status, 'scur', 1, 0)) AS scur,
       SUM(DECODE(bh.status, 'cr',   1, 0)) AS cr,
       SUM(DECODE(bh.status, 'read', 1, 0)) AS read,
       SUM(DECODE(bh.status, 'mrec', 1, 0)) AS mrec,
       SUM(DECODE(bh.status, 'irec', 1, 0)) AS irec
FROM   v$bh bh
JOIN   dba_objects o ON o.object_id = bh.objd
JOIN   v$tablespace t ON t.ts# = bh.ts#
GROUP BY t.name, o.object_name;


SQL> SET LINESIZE 200
SQL> COLUMN name FORMAT A30
SQL> COLUMN network_name FORMAT A50

SQL> SELECT name, network_name
     FROM   dba_services
     ORDER BY name;

View the result in your SQL window.

NAME                                      NETWORK_NAME
----------------------------------------- -----------------------------------------
SYS$BACKGROUND
SYS$USERS
haf                                       haf
hafXDB                                    hafXDB
seeddata.regress.rdbms.dev.us.oracle.com  seeddata.regress.rdbms.dev.us.oracle.com
seeddataXDB                               seeddataXDB

 

Script to display comments associated with a specific table:

SQL> SELECT table_name,
            comments
     FROM   dba_tab_comments
     WHERE  owner = UPPER('&P1')
     AND    table_name LIKE UPPER('%&P2%')
     ORDER BY table_name;

Check the results.

Miscellaneous queries to check the general health of the system:

SQL> SELECT file_id, tablespace_name, file_name, status FROM sys.dba_data_files;

SQL> SELECT file#, name, status, enabled FROM v$datafile;

SQL> SELECT * FROM v$backup;

SQL> SELECT * FROM v$recovery_status;

SQL> SELECT * FROM v$recover_file;

SQL> SELECT * FROM v$recovery_file_status;

SQL> SELECT * FROM v$recovery_log;

SQL> SELECT username, command, status, module FROM v$session;

Check the output of all the queries.


 

Some General Scripts:

SQL> SELECT LOWER(USER || '@' || instance_name) X FROM v$instance;

It will display the currently connected user with the instance name; in my case it is displayed as

ddo@haf

SQL> ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'; SQL> ALTER SESSION SET NLS_TIMESTAMP_FORMAT='DD-MON-YYYY HH24:MI:SS.FF'; SQL> SET TERMOUT ON SQL> SET FEEDBACK ON

SQL> select sysdate from dual;

Output: 17-NOV-2007 23:29:00

 

 Script to Display information on all database sessions:

SQL> SET LINESIZE 500
SQL> SET PAGESIZE 1000

SQL> COLUMN username FORMAT A15
SQL> COLUMN machine FORMAT A25
SQL> COLUMN logon_time FORMAT A20

SQL> SELECT LPAD(' ', (level-1)*2, ' ') || NVL(s.username, '(oracle)') AS username,
            s.osuser,
            s.sid,
            s.serial#,
            s.lockwait,
            s.status,
            s.module,
            s.machine,
            s.program,
            TO_CHAR(s.logon_time,'DD-MON-YYYY HH24:MI:SS') AS logon_time
     FROM   v$session s
     CONNECT BY PRIOR s.sid = s.blocking_session
     START WITH s.blocking_session IS NULL;

Check the results.

Displays scheduler information for running jobs:

SQL> SELECT owner, job_name, running_instance, elapsed_time
     FROM   dba_scheduler_running_jobs
     ORDER BY owner, job_name;




Generating CBO statistics:

Many database administrators use the old 'analyze table' and 'dbms_utility' methods to generate CBO statistics. However, these methods are obsolete and somewhat dangerous to SQL performance, because the cost-based SQL optimizer (CBO) relies on the quality of the statistics to choose the best execution plan for all SQL statements.

dbms_stats:

The dbms_stats utility does a far better job of estimating CBO statistics, especially for large partitioned tables, and better stats result in faster SQL execution plans.

How does dbms_stats work?

Below is a sample execution of dbms_stats with the options clause

exec dbms_stats.gather_schema_stats( -

     ownname          => 'SCOTT', -

     estimate_percent => dbms_stats.auto_sample_size, -

     method_opt       => 'for all columns size repeat', -

     degree           => 34 -

   )

options clause:

When the options clause is specified you may specify GATHER options.  When GATHER AUTO is specified, the only additional valid parameters are ownname, stattab, statid, objlist and statown; all other parameter settings are ignored.

exec dbms_stats.gather_schema_stats( -

     ownname          => 'SCOTT', -

     options          => 'GATHER AUTO'

   )


There are several values for the options parameter:

gather: Re-analyzes the whole schema.

gather empty: Only analyzes tables that have no existing statistics.

gather stale: Only re-analyzes tables with more than 10% modifications (inserts, updates, deletes). gather stale requires monitoring.

gather auto: Re-analyzes objects which currently have no statistics and objects with stale statistics. Using gather auto is like combining gather stale and gather empty. Both gather stale and gather auto require monitoring.


alter table xxx monitoring command:

If you issue the alter table xxx monitoring command, Oracle tracks changed tables in the dba_tab_modifications view. Below we see that the exact numbers of inserts, updates and deletes are tracked since the last statistics collection.

SQL> desc dba_tab_modifications;

 Name                Type
 ------------------- ------------
 TABLE_OWNER         VARCHAR2(30)
 TABLE_NAME          VARCHAR2(30)
 PARTITION_NAME      VARCHAR2(30)
 SUBPARTITION_NAME   VARCHAR2(30)
 INSERTS             NUMBER
 UPDATES             NUMBER
 DELETES             NUMBER
 TIMESTAMP           DATE
 TRUNCATED           VARCHAR2(3)
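Once monitoring is on, you can query the view directly. A small sketch (my addition; on 9i you may need to call dbms_stats.flush_database_monitoring_info first to see up-to-date counts):

select table_name, inserts, updates, deletes, timestamp
from   dba_tab_modifications
where  table_owner = 'SCOTT';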

gather stale option:

Because all statistics will become stale quickly in a robust OLTP database, we must remember the rule for gather stale is > 10% row change (based on num_rows at statistics collection time). 


Almost every table except read-only tables will be re-analyzed with the gather stale option. Hence, the gather stale option is best for systems that are largely read-only.  For example, if only 5% of the database tables get significant updates, then only 5% of the tables will be re-analyzed with the “gather stale” option.

CASCADE option:

When analyzing specific tables, the cascade option can be used to analyze all related objects based on foreign-key constraints.  For example, stats$snapshot has foreign key referential integrity into all subordinate tables ( stats$sysstat , etc.), so a single analyze can invoke an analyze of all subordinate tables:

exec  dbms_stats.gather_table_stats( -

      ownname          => 'PERFSTAT', -

      tabname          => 'STATS$SNAPSHOT', -

      estimate_percent => dbms_stats.auto_sample_size, -

      method_opt       => 'for all columns size skewonly', -

      cascade          => true, -

      degree           => 7 -

   )

DEGREE Option:

CBO does full-table and full-index scans and hence you can also parallelize the collection of statistics.  When you set degree=x , Oracle will invoke parallel query slave processes to speed up table access.  Degree is usually about equal to the number of CPUs, minus 1 (for the OPQ query coordinator).

estimate_percent argument:

You can specify the sample size for dbms_stats .  The estimate_percent argument allow Oracle's dbms_stats to automatically estimate the best percentage of a segment to sample when gathering statistics:

      estimate_percent => dbms_stats.auto_sample_size

dba_tables sample_size column:

You can verify the accuracy of the automatic statistics sampling by looking at the dba_tables sample_size column.  Oracle chooses between 5% to 20% for a sample_size when using automatic sampling.



In previous Oracle releases, tables could be made to appear read-only to other users by only granting the SELECT object privilege to them, but the tables remained read-write for the owner. Oracle 11g allows tables to be marked as read-only using the ALTER TABLE command.

SQL> ALTER TABLE table_name READ ONLY;
SQL> ALTER TABLE table_name READ WRITE;

The following script creates a table, inserts a row, then sets the table to read-only.

SQL> CREATE TABLE test (id NUMBER);
SQL> INSERT INTO test VALUES (1);
SQL> ALTER TABLE test READ ONLY;

Any DML statements that affect the table data and SELECT ... FOR UPDATE queries result in an ORA-12081 error message.

SQL> INSERT INTO test VALUES (2);

ERROR at line 1:
ORA-12081: update operation not allowed on table "DDO"."TEST"

SQL> UPDATE test SET id = 2;

ERROR at line 1:
ORA-12081: update operation not allowed on table "DDO"."TEST"

SQL> DELETE FROM test;

ERROR at line 1:
ORA-12081: update operation not allowed on table "DDO"."TEST"

 

DDL statements that affect the table data are also restricted.

SQL> TRUNCATE TABLE test;

ERROR at line 1:
ORA-12081: update operation not allowed on table "DDO"."TEST"

SQL> ALTER TABLE test ADD (description VARCHAR2(50));

ERROR at line 1:
ORA-12081: update operation not allowed on table "DDO"."TEST"

Operations on indexes associated with the table are unaffected by the read-only state. DML and DDL operations return to normal once the table is switched back to read-write mode.

SQL> ALTER TABLE test READ WRITE;

Table altered.


SQL> DELETE FROM test;

1 row deleted.

The read-only status of tables is displayed in the READ_ONLY column of the following views (DBA_TABLES, ALL_TABLES, USER_TABLES).
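For example (my addition), you can check the flag for the test table used above:

SQL> SELECT table_name, read_only FROM user_tables WHERE table_name = 'TEST';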

Taking the Guesswork Out of Index Space Requirements

Now let's say you are about to embark on the task of creating an index on a table, or perhaps a set of tables. Besides the usual structural elements such as columns and uniqueness, the most important thing you have to consider is the expected size of the index: you must ensure that the tablespace has enough space to hold the new index. With Oracle9i Database and below, many DBAs use tools ranging from spreadsheets to standalone programs to estimate the size of the future index. In 10g, this task has become extremely trivial through the use of the DBMS_SPACE package. Let's see it in action. We are asked to create an index on the columns booking_id and cust_name of the table BOOKINGS. How much space does the proposed index need? All you do is execute the following PL/SQL script.

declare
   l_used_bytes number;
   l_alloc_bytes number;
begin
   dbms_space.create_index_cost (
      ddl => 'create index in_bookings_hist_01 on bookings_hist '||
        '(booking_id, cust_name) tablespace users',
      used_bytes => l_used_bytes,
      alloc_bytes => l_alloc_bytes
   );
   dbms_output.put_line ('Used Bytes      = '||l_used_bytes);
   dbms_output.put_line ('Allocated Bytes = '||l_alloc_bytes);
end;
/

The output is:

Used Bytes      = 7501128
Allocated Bytes = 12582912

Suppose you want to use some parameters that will potentially increase the size of the index, for example, specifying an INITRANS parameter of 10.

declare
   l_used_bytes number;
   l_alloc_bytes number;
begin
   dbms_space.create_index_cost (
      ddl => 'create index in_bookings_hist_01 on bookings_hist '||
        '(booking_id, cust_name) tablespace users initrans 10',
      used_bytes => l_used_bytes,
      alloc_bytes => l_alloc_bytes
   );
   dbms_output.put_line ('Used Bytes      = '||l_used_bytes);
   dbms_output.put_line ('Allocated Bytes = '||l_alloc_bytes);
end;
/

The output is:

Used Bytes      = 7501128
Allocated Bytes = 13631488

Note the increase in the allocated bytes from specifying a higher INITRANS. Using this approach you can easily determine the impact of the index on storage space. You should be aware of two important caveats, however. First, this process applies only to tablespaces with SEGMENT SPACE MANAGEMENT AUTO turned on. Second, the package calculates the estimated size of the index from the statistics on the table. Hence it's very important to have relatively fresh statistics on the tables. But beware: the absence of statistics on the table will not result in an error in the use of the package, but will yield a wrong result.

Estimating Table Size

Suppose there is a table named BOOKINGS_HIST, which has an average row size of 30, a row count of 30,000 and a PCTFREE parameter of 20. What if you wanted to increase the parameter PCT_FREE to 30: by what amount will the table increase in size? Because 30 is 10 points more than 20, will the size go up by 10%? Instead of asking your psychic, ask the procedure CREATE_TABLE_COST inside the package DBMS_SPACE. Here is how you can estimate the size:

declare
   l_used_bytes number;
   l_alloc_bytes number;
begin
   dbms_space.create_table_cost (
       tablespace_name => 'USERS',
       avg_row_size => 30,
       row_count => 30000,
       pct_free => 20,
       used_bytes => l_used_bytes,
       alloc_bytes => l_alloc_bytes
   );
   dbms_output.put_line('Used: '||l_used_bytes);
   dbms_output.put_line('Allocated: '||l_alloc_bytes);
end;
/

The output is:

Used: 1261568
Allocated: 2097152

Changing the table's PCT_FREE parameter to 30 from 20, by specifying pct_free => 30, we get the output:

Used: 1441792
Allocated: 2097152

Note how the used space has increased from 1,261,568 to 1,441,792 because the PCT_FREE parameter conserves less room in the data block for user data. The increase is about 14%, not 10%, as expected. Using this package you can easily calculate the impact of parameters such as PCT_FREE on the size of the table, or of moving the table to a different tablespace.

Predicting the Growth of a Segment

It's a holiday weekend and Acme Hotels is expecting a surge in demand. As a DBA, you are trying to understand the demand so that you can ensure there is enough space available. How do you predict the space utilization of the table?

Just ask 10g; you will be surprised how accurately and intelligently it can make that prediction for you. You simply issue this query:

select * from table(dbms_space.object_growth_trend('ARUP','BOOKINGS','TABLE'));

The function dbms_space.object_growth_trend() returns records in PIPELINEd format, which can be displayed by the TABLE() casting. Here is the output:

TIMEPOINT                      SPACE_USAGE SPACE_ALLOC QUALITY
------------------------------ ----------- ----------- ------------
05-MAR-04 08.51.24.421081 PM       8586959    39124992 INTERPOLATED
06-MAR-04 08.51.24.421081 PM       8586959    39124992 INTERPOLATED
07-MAR-04 08.51.24.421081 PM       8586959    39124992 INTERPOLATED
08-MAR-04 08.51.24.421081 PM     126190859  1033483971 INTERPOLATED
09-MAR-04 08.51.24.421081 PM       4517094     4587520 GOOD
10-MAR-04 08.51.24.421081 PM     127469413  1044292813 PROJECTED
11-MAR-04 08.51.24.421081 PM     128108689  1049697234 PROJECTED
12-MAR-04 08.51.24.421081 PM     128747966  1055101654 PROJECTED
13-MAR-04 08.51.24.421081 PM     129387243  1060506075 PROJECTED
14-MAR-04 08.51.24.421081 PM     130026520  1065910496 PROJECTED

The output clearly shows the size of the table BOOKINGS at various times, as shown in the column TIMEPOINT (of the TIMESTAMP datatype). The SPACE_ALLOC column shows the bytes allocated to the table and the SPACE_USAGE column shows how many of those bytes have been used. This information is collected by the Automatic Workload Repository, or AWR (see Week 6 of this series), every day. In the above output, the data was collected properly on March 9, 2004, as indicated by the value "GOOD" in the column QUALITY; the space allocated and usage figures are accurate for that day. However, for all subsequent days, the value of this column is PROJECTED, indicating that the space calculations are projected from the data collected by the AWR facility, not collected directly from the segment. Note the values in this column prior to March 9: they are all INTERPOLATED. In other words, the value was not really collected or projected, but simply interpolated from the usage pattern for whatever data is available. Most likely the data was not available at that time and hence the values had to be interpolated.

Conclusion

With the availability of segment-level manipulations you now have fine-grained control over how space is used inside a segment, which can be exploited to reclaim free space inside a table, reorganize the table rows to make it more compact online, and much more. These facilities help free DBAs from routine and mundane tasks like table reorganization. The online segment shrink feature is especially helpful in eliminating internal fragmentation and lowering the high water mark of the segment, which can significantly reduce the cost of a full table scan. For more information about the SHRINK operation, see this section of the Oracle Database SQL Reference. Learn more about the DBMS_SPACE package in Chapter 88 of the PL/SQL Packages and Types Reference. For a comprehensive review of all new space management features in Oracle Database 10g, read the technical whitepaper The Self-Managing Database: Proactive Space & Schema Object Management. Finally, a demo of Oracle Database 10g space management is available online.


Unused indexes waste space and add overhead for DML, so they should be removed from the database. Tracking unused indexes is tricky and should be done with caution. Oracle 10g provides an easy approach for tracking unused indexes, but for Oracle 9i and Oracle 8i this is not an easy task. In this article I will discuss some ways to determine unused indexes in Oracle databases.

Tracking Oracle 8i unused indexes:

Tracking unused indexes in Oracle8i involves running complex scripts to interrogate the Oracle 8i library cache and parse for index names. 

Tracking Oracle 9i unused indexes:

One of the great features of Oracle9i is the ability to easily locate and remove unused indexes.  When an index is not used by SQL queries with the cost-based optimizer, the unused indexes waste space and cause INSERT statements to run slower.

alter index <index_name> monitoring usage command:

When you issue the alter index <index_name> monitoring usage command, Oracle places an entry in the v$object_usage view so you can see if the index is used.  This is just a bit-flag that is set to “1” when the index is accessed by any SQL statement. Below is a simple SQL*Plus script to track all index usage in all Oracle schemas:

set pages 999;
set heading off;
spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from   dba_indexes
where  owner not in ('SYS','SYSTEM','PERFSTAT');

spool off;

@run_monitor

After a significant amount of SQL has executed on the database you can query the new v$object_usage view:

select index_name,
       monitoring mon,
       used
from   v$object_usage;

The v$object_usage view has a column called USED, which will be set to YES or NO. This script will track unused indexes, but it does not tell how many times an index has been used:

INDEX_NAME              MON USED
----------------------- --- ----
CUSTOMER_LAST_NAME_IDX  YES NO
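When you are done sampling, monitoring can be switched off per index. A sketch (my addition, using the index name from the output above):

SQL> alter index customer_last_name_idx nomonitoring usage;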

Querying sys.object_usage fixed table:

The v$object_usage view shows rows only for the user who was logged in when the alter index <index_name> monitoring usage command was issued. The solution is to bypass the v$object_usage view and query the underlying sys.object_usage fixed table.


select io.name, t.name,
       decode(bitand(i.flags, 65536), 0, 'NO', 'YES'),
       decode(bitand(ou.flags, 1), 0, 'NO', 'YES'),
       ou.start_monitoring,
       ou.end_monitoring
from   sys.obj$ io,
       sys.obj$ t,
       sys.ind$ i,
       sys.object_usage ou
where  io.owner# = userenv('SCHEMAID')
and    i.obj#  = ou.obj#
and    io.obj# = ou.obj#
and    t.obj#  = i.bo#;

Below is another possible solution

select u.name  "owner",
       io.name "index_name",
       t.name  "table_name",
       decode(bitand(i.flags, 65536), 0, 'no', 'yes') "monitoring",
       decode(bitand(nvl(ou.flags,0), 1), 0, 'no', 'yes') "used",
       ou.start_monitoring "start_monitoring",
       ou.end_monitoring   "end_monitoring"
from   sys.obj$ io,
       sys.obj$ t,
       sys.ind$ i,
       sys.object_usage ou,
       sys.user$ u
where  t.obj#    = i.bo#
and    io.owner# = u.user#
and    io.obj#   = i.obj#
and    u.name not in ('SYS','SYSTEM')
and    i.obj#    = ou.obj#(+);

Tracking Oracle 10g unused indexes with AWR Query:

Oracle 10g enables one to easily see which indexes are used, when they are used and the context in which they are used. Below is a simple AWR query to plot index usage:

col c1 heading 'Object|Name'        format a30
col c2 heading 'Operation'          format a15
col c3 heading 'Option'             format a15
col c4 heading 'Index|Usage|Count'  format 999,999
break on c1 skip 2
break on c2 skip 2

select
   p.object_name c1,
   p.operation   c2,
   p.options     c3,
   count(1)      c4
from
   dba_hist_sql_plan p,
   dba_hist_sqlstat s
where
   p.object_owner <> 'SYS'
and
   p.operation like '%INDEX%'
and
   p.sql_id = s.sql_id
group by
   p.object_name,
   p.operation,
   p.options
order by
   1,2,3;


Tracking Oracle 10g unused indexes and invocation count:

Below script tracks unused indexes, show the invocation count of all indexes and the columns referenced for multi-column indexes:

col c1 heading 'Begin|Interval|time' format a20
col c2 heading 'Search Columns'      format 999,999
col c3 heading 'Invocation|Count'    format 999,999

break on c1 skip 2

select
   begin_interval_time c1,
   count(*)            c3
from
   dba_hist_sqltext natural join
   dba_hist_snapshot
where
   lower(sql_text) like lower('%cust_name_idx%')
group by
   begin_interval_time;

Here is the output showing index usage

Invocation Counts for cust_index

Begin
Interval                        Invocation
time         Search Columns          Count
------------ -------------- --------------
04-10-21 15               1              3
04-10-10 16               0              1
04-10-10 19               1              1
04-10-11 02               0              2
04-10-11 04               2              1
04-10-11 06               3              1
04-10-11 11               0              1

The script below can detect duplicate indexes in Oracle 10g:

select /*+ rule */
   a.table_owner,
   a.table_name,
   a.index_owner,
   a.index_name,
   column_name_list,
   column_name_list_dup,
   dup duplicate_indexes,
   i.uniqueness,
   i.partitioned,
   i.leaf_blocks,
   i.distinct_keys,
   i.num_rows,
   i.clustering_factor
from (
   select
      table_owner, table_name, index_owner, index_name,
      column_name_list_dup, dup,
      max(dup) over (partition by table_owner, table_name, index_name) dup_mx
   from (
      select
         table_owner, table_name, index_owner, index_name,
         substr(sys_connect_by_path(column_name, ','),2) column_name_list_dup,
         dup
      from (
         select
            index_owner, index_name, table_owner, table_name, column_name,
            count(1) over (partition by index_owner, index_name) cnt,
            row_number() over (partition by index_owner, index_name
                               order by column_position) as seq,
            count(1) over (partition by table_owner, table_name,
                           column_name, column_position) as dup
         from sys.dba_ind_columns
         where index_owner not in ('SYS', 'SYSTEM','DLOBAUGH')
      )
      where dup != 1
      start with seq = 1
      connect by prior seq+1 = seq
         and prior index_owner = index_owner
         and prior index_name = index_name
   )
) a,
(
   select
      table_owner, table_name, index_owner, index_name,
      substr(sys_connect_by_path(column_name, ','),2) column_name_list
   from (
      select
         index_owner, index_name, table_owner, table_name, column_name,
         count(1) over (partition by index_owner, index_name) cnt,
         row_number() over (partition by index_owner, index_name
                            order by column_position) as seq
      from sys.dba_ind_columns
      where index_owner not in ('SYS', 'SYSTEM')
   )
   where seq = cnt
   start with seq = 1
   connect by prior seq+1 = seq
      and prior index_owner = index_owner
      and prior index_name = index_name
) b,
dba_indexes i
where a.dup = a.dup_mx
and   a.index_owner = b.index_owner
and   a.index_name  = b.index_name
and   a.index_owner = i.owner
and   a.index_name  = i.index_name
order by a.table_owner, a.table_name, column_name_list_dup;

Oracle Import Tuning:

There is no direct tuning option available for imports, so import has more tuning limitations than other utilities. The DBA should consider the following when trying to optimize import performance:

- Set analyze=n and analyze with dbms_stats after the load has completed.
- Many large companies use partitioned tables, and keep the current partition on SSD for fast imports.
- The recordlength needs to be a multiple of your I/O chunk size and db_block_size (or db_nk_block_size).
- commit=n should be set for tables that can afford not to commit until the end of the load. However, larger tables may not be suitable for this option due to the required rollback/undo space.
- A single large rollback segment can be created, taking all others offline during the import.
- Index creation can be postponed until after the import completes by specifying indexes=n. Setting indexes=n eliminates this maintenance overhead. You can also use the indexfile parameter to rebuild all the indexes once, after the data is loaded.
- By using a larger buffer setting, import can do more work before disk access is performed.
- You can also use the hidden parameter _disable_logging = true to reduce redo, but beware that the resulting import will be unrecoverable.
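Pulling several of these options together, a typical tuned conventional-path import might look like this (a sketch only, my addition; the file name and buffer size are made up):

imp system/manager file=exp_full.dmp full=y buffer=20000000 commit=n indexes=n analyze=n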

Oracle Exact Performance Gains:

The exact performance gain achieved depends upon the following factors:

- DB_BLOCK_SIZE
- The types of columns in your table
- The I/O layout

Oracle SQL*Loader Tuning:

Oracle SQL*Loader is flexible and offers many options that should be considered to maximize the speed of data loads. These include:

The direct path loader ( direct=true ) loads directly into the Oracle data files and creates blocks in Oracle database block format.  To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql .sql must be executed.

Disabling of indexes and constraints for conventional data loads can greatly enhance the performance of SQL*Loader.  

Larger bind arrays limit the number of calls to the database and increase performance for conventional data loads only. 

rows specifies the number of rows per commit; issuing fewer commits enhances performance for conventional data loads.

The Parallel Loads option allows multiple SQL*Loader jobs to execute concurrently.

Fixed-width data format saves Oracle some processing when parsing the data.

Disabling archiving during the load may not be feasible in certain environments; however, disabling database archiving can increase performance considerably.

The unrecoverable load data option disables the writing of the data to the redo logs.  This option is available for direct path loads only.
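As a concrete sketch, a direct-path, unrecoverable load might be set up as follows (the control file, data file, and table names are hypothetical):

$ sqlldr userid=scott/tiger control=load_emp.ctl log=load_emp.log direct=true

-- load_emp.ctl
UNRECOVERABLE LOAD DATA
INFILE 'emp.dat'
APPEND INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)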

Oracle INSERT performance Tuning:

When using standard SQL statements to load Oracle data tables, there are several tuning approaches:

Manage segment header contention for parallel inserts: make sure to define multiple freelists (or freelist groups) to remove contention for the table header.

You can invoke parallel DML to have multiple inserts into the same table.

By using the APPEND hint, you ensure that Oracle always grabs fresh data blocks by raising the high-water mark for the table. If you are doing parallel insert DML, Append mode is the default and you don't need to specify an APPEND hint. (A minimal example follows this list.)

By defining large (e.g. 32K) blocksizes for the target table, you reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.

It's far faster to rebuild indexes after the data load, all at once. Indexes will also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.

For databases that require high-speed loads, some shops define the insert table partition on solid-state disk, later moving it to platter disk. If the Oracle-level processing for insert and update activities could be optimized for SSD, significant performance gains might be realized during these activities.
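For reference, a minimal direct-path insert using the APPEND hint (the table names are illustrative):

INSERT /*+ APPEND */ INTO sales_hist
SELECT * FROM sales_stage;

COMMIT;

Note that a direct-path insert must be committed before the same session can query the table again.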


Oracle Parsing:

Parsing is the first step in the processing of any statement in an Oracle database. The statement is broken down into its component parts, the type of statement (DML or DDL) is determined, and various checks are performed on it. A statement must be evaluated and validated before execution. Oracle evaluates the statement for syntax, the validity of the objects referenced, and the privileges assigned to the user.

Types of Oracle Parsing:

There are two types of parsing in Oracle:

Soft Parsing
Hard Parsing

In soft parsing, the statement has already been parsed and is available in memory.

In hard parsing, all parsing steps have to be carried out.

Soft parses considerably improve system performance.

Hard parsing burdens the system. Reducing hard parsing improves resource utilization and optimizes the SQL code.

Oracle Parsing Process:

The Oracle parsing process follows the steps below to execute a SQL statement and arrive at the output.

Syntactical check:

In the syntactical check, the query is checked for correct syntax.

Semantic check:

In the semantic check, the query is checked for the validity of the objects referred to in the statement and the privileges available to the user issuing the statement. This is a data dictionary check.

Allocation:

This step includes the allocation of a private SQL area in memory for the statement.

Generating the Parsed Representation and Allocating a Shared SQL Area:

In this step a parsed representation of the statement is generated and a shared SQL area is allocated. This involves finding an optimal execution path for the statement. Oracle first checks whether the same statement is already parsed and exists in memory. If yes, a soft parse is done: the parsed representation is picked up and the statement is executed immediately. If the statement is not found, a hard parse is done: the parsed representation is generated and stored in a shared SQL area, and then the statement is executed.

Deciding the type of parsing:

Oracle does the following in order to decide on a soft parse or hard parse.

Comparing new hash value with existing hash value:

When a new statement is fired, a hash value is generated for the text string. Oracle checks if this new hash value matches with any existing hash value in the shared pool.

Comparing text string with hash value:

In this step the text string of the new statement is compared with the statements whose hash values matched.

Comparing referred objects with matching statements objects:

If a match is found, the objects referred in the new statement are compared with the matching statement objects. The bind variable types of the new statement should be of same type as the identified matching statement.

Deciding the parse type:

If all of the above is satisfied, Oracle uses a soft parse, re-using the existing parsed representation. However, if a match is not found, Oracle follows the hard parse route and goes through the process of parsing the statement and putting it in the shared pool.

Reducing hard parsing:

Below are guidelines that can be followed to reduce hard parsing.

Use bind variables instead of hard-coding values (see the sketch below).

Avoid code repetition: write generic routines that can be called from different places.

Avoid separate parsing of similar statements. Change such statements to be look-alike, or put them in a common routine, so that a single parse can take care of all calls to the statement.
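For illustration, a minimal SQL*Plus sketch of the bind-variable guideline (the table and value are hypothetical):

-- Hard-coded: every distinct literal forces a new hard parse
select ename from emp where empno = 7839;

-- Bind variable: one shared cursor serves all values
variable v_empno number
exec :v_empno := 7839
select ename from emp where empno = :v_empno;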

Identifying Unnecessary Parse Calls:

Check for statements with a lot of executions; a PARSE_CALLS value close to the EXECUTIONS value indicates unnecessary re-parsing and should be avoided.

select parse_calls, executions, substr(sql_text, 1, 300)
from v$sqlarea
where command_type in (2, 3, 6, 7);


The queries below identify sessions that involve a lot of re-parsing. Query these sessions from V$SESSION and then locate the program that is being executed, resulting in so much parsing.

select b.sid, a.name, b.value
from v$sesstat b, v$statname a
where a.name in ('parse count (hard)', 'execute count')
and b.statistic# = a.statistic#
order by sid;

select a.parse_calls, a.executions, substr(a.sql_text, 1, 300)
from v$sqlarea a, v$session b
where b.schema# = a.parsing_schema_id
and b.sid = <:sid>
order by 1 desc;

Provide enough private SQL area to accommodate all of the SQL statements for a session. Depending on the requirement, the parameter OPEN_CURSORS may need to be reset to a higher value. Set the SESSION_CACHED_CURSORS to a higher value to allow more cursors to be cached at session level and to avoid re-parsing.
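For example, the relevant init.ora entries might look like this (the values are illustrative and should be sized for your workload):

open_cursors = 500
session_cached_cursors = 100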

Identifying the opened cursors:

The code below helps identify the open cursors for a session and how near the count is to the OPEN_CURSORS parameter value. If the margin is very small, consider increasing the OPEN_CURSORS parameter.

select a.username, a.sid, b.value
from v$session a, v$sesstat b, v$statname c
where b.sid = a.sid
and c.statistic# = b.statistic#
and c.name = 'opened cursors current'
order by 3 desc;

Evaluating cached cursors for sessions:

The CACHE_CNT ('session cursor cache hits') of a session should be compared to the PARSE_CNT ('parse count (total)'), if the difference is high, consider increasing the SESSION_CACHED_CURSORS parameter.

select a.sid, a.value parse_cnt,
       (select x.value
        from v$sesstat x, v$statname y
        where x.sid = a.sid
        and y.statistic# = x.statistic#
        and y.name = 'session cursor cache hits') cache_cnt
from v$sesstat a, v$statname b
where b.statistic# = a.statistic#
and b.name = 'parse count (total)'
and value > 0;

The following parse-related information is available in the V$SYSSTAT and V$SESSTAT views; join them to V$STATNAME using the STATISTIC# column.


SQL> select * from v$statname where name like '%parse%';

Utilizing Shared SQL area:

The shared SQL area can be further utilized for identical as well as somewhat similar queries by setting the initialization parameter CURSOR_SHARING to FORCE. The default value is EXACT. Try out this parameter for your application in test mode before making changes in production.
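A cautious way to try it, limited to one session:

SQL> alter session set cursor_sharing = FORCE;

Session altered.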

Preventing Ageing out:

Set the parameter SHARED_POOL_RESERVED_SIZE to a larger value to prevent large packages from being aged out because of new entries. A large overhead is involved in reloading a large package that was aged out.

Pinning:

Pin frequently used objects in memory using the DBMS_SHARED_POOL package. Use it to pin the most frequently used objects that should stay in memory while the instance is up. Pin objects when the instance starts to avoid memory fragmentation. The code below provides a list of frequently used and re-loaded objects:

select loads, executions, substr(owner, 1, 15) "Owner",
       substr(namespace, 1, 20) "Type", substr(name, 1, 100) "Text"
from v$db_object_cache
order by executions desc;

In order to pin a package in memory and to view the list of pinned objects, use the syntax below:

SQL>exec dbms_shared_pool.keep('standard', 'p');

select substr(owner, 1, 15) "Owner", substr(namespace, 1, 20) "Type",
       substr(name, 1, 100) "Text"
from v$db_object_cache
where kept = 'YES';

Increasing the shared pool size:

The size of the shared pool can be increased by setting the parameter SHARED_POOL_SIZE in the initialization file. Increasing the shared pool size is an immediate solution, but the above steps need to be carried out to optimize the database in the long run.
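For example, in the init.ora (the size shown is illustrative):

shared_pool_size = 157286400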

CPU USED BY THE SESSION

SELECT substr(a.sid,1,5) "Sid",
       substr(a.process,1,7) "Process",
       substr(a.username,1,20) "User",
       v.value "CPU used by this session"
FROM v$statname s, v$sesstat v, v$session a
WHERE s.name = 'CPU used by this session'
  and v.statistic# = s.statistic#
  AND v.sid = a.sid
  and v.value > 1000
ORDER BY v.value DESC;

 

MEMORY USED BY THE SESSION

SELECT e.SID, e.username, e.status, a.UGA_MEMORY, b.PGA_MEMORY
FROM
-- Current UGA size for the session.
(select y.SID, TO_CHAR(ROUND(y.value/1024),'99999999') || ' KB' UGA_MEMORY
 from v$sesstat y, v$statname z
 where y.STATISTIC# = z.STATISTIC# and NAME = 'session uga memory') a,
-- Current PGA size for the session.
(select y.SID, TO_CHAR(ROUND(y.value/1024),'99999999') || ' KB' PGA_MEMORY
 from v$sesstat y, v$statname z
 where y.STATISTIC# = z.STATISTIC# and NAME = 'session pga memory') b,
v$session e
WHERE e.sid = a.sid
AND e.sid = b.sid
ORDER BY e.status, a.UGA_MEMORY desc;

Dialect of UNIX RAM display command:

DEC-UNIX:  uerf -r 300 | grep -i mem
Solaris:   prtconf | grep -i mem
AIX:       lsdev -C | grep mem
Linux:     free
HP/UX:     swapinfo -tm

Pre-requisites

A valid full database backup of the target database:

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Tag
------- -- -- - ----------- --------------- ------- ------- ---
14      B  A  A DISK        03-NOV-04       1       1       TAG20041103T163334
15      B  F  A DISK        03-NOV-04       1       1       TAG20041103T163336
16      B  A  A DISK        03-NOV-04       1       1       TAG20041103T163651
17      B  F  A DISK        03-NOV-04       1       1

The target database must be mounted or open:

$ sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.5.0 - Production on Wed Nov 3 16:42:34 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to an idle instance.

SQL> startup open
ORACLE instance started.

Total System Global Area  252777660 bytes
Fixed Size                   451772 bytes
Variable Size             218103808 bytes
Database Buffers           33554432 bytes
Redo Buffers                 667648 bytes
Database mounted.
Database opened.

SQL> exit

Steps Required

1. Create Password File for Auxiliary Database

$ orapwd file=/u01/app/oracle/product/9.2.0/dbs/orapwTESTDB password=change_on_install

2. Create an Initialization Parameter for the Auxiliary Database

Copy the initialization parameter from the target database and make the necessary changes for the duplicated database.

$ export ORACLE_SID=ORA920
$ sqlplus "/ as sysdba"

SQL> create pfile='/u01/app/oracle/product/9.2.0/dbs/initTESTDB.ora' from spfile;

File created.


After creating the initialization parameter for the duplicate database, change at least the following parameters:

db_file_name_convert  = ('/u06/app/oradata/ORA920', '/u06/app/oradata/TESTDB')
log_file_name_convert = ('/u03/app/oradata/ORA920', '/u03/app/oradata/TESTDB',
                         '/u04/app/oradata/ORA920', '/u04/app/oradata/TESTDB',
                         '/u05/app/oradata/ORA920', '/u05/app/oradata/TESTDB')
control_files        = '/u03/app/oradata/TESTDB/control01.ctl'
                     , '/u04/app/oradata/TESTDB/control02.ctl'
                     , '/u05/app/oradata/TESTDB/control03.ctl'
db_name              = 'TESTDB'
instance_name        = 'TESTDB'
audit_file_dest      = '/u01/app/oracle/admin/TESTDB/adump'
background_dump_dest = '/u01/app/oracle/admin/TESTDB/bdump'
core_dump_dest       = '/u01/app/oracle/admin/TESTDB/cdump'
user_dump_dest       = '/u01/app/oracle/admin/TESTDB/udump'
service_names        = 'TESTDB.IDEVELOPMENT.INFO'
dispatchers          = '(PROTOCOL=TCP) (SERVICE=TESTDBXDB)'
log_archive_dest_1   = 'location=/u06/app/oradata/TESTDB/archive mandatory'

3. Create / Start the Auxiliary Instance

Create all needed directories for the duplicate database:

$ mkdir /u01/app/oracle/admin/TESTDB
$ mkdir /u01/app/oracle/admin/TESTDB/adump
$ mkdir /u01/app/oracle/admin/TESTDB/bdump
$ mkdir /u01/app/oracle/admin/TESTDB/cdump
$ mkdir /u01/app/oracle/admin/TESTDB/create
$ mkdir /u01/app/oracle/admin/TESTDB/pfile
$ mkdir /u01/app/oracle/admin/TESTDB/scripts
$ mkdir /u01/app/oracle/admin/TESTDB/udump

$ mkdir /u03/app/oradata/TESTDB
$ mkdir /u04/app/oradata/TESTDB
$ mkdir /u05/app/oradata/TESTDB
$ mkdir /u06/app/oradata/TESTDB
$ mkdir /u06/app/oradata/TESTDB/archive

$ export ORACLE_SID=TESTDB

$ sqlplus "/ as sysdba"

SQL> startup nomount

4. Ensure Oracle Net Connectivity to Auxiliary Database

Modify both the listener.ora and tnsnames.ora file to be able to connect to the auxiliary database. After making changes to the networking files, test the connection keeping in mind that you must be able to connect to the auxiliary instance with SYSDBA privileges, so a valid password file must exist.


$ sqlplus "sys/change_on_install@TESTDB as sysdba"

SQL*Plus: Release 9.2.0.5.0 - Production on Wed Nov 3 17:43:22 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production

SQL>

5. Mount or Open the Target Database

As mentioned in the pre-requisites section of this article, the target database should be either opened or mounted.

$ export ORACLE_SID=ORA920
$ sqlplus "/ as sysdba"

SQL> startup open

6. Ensure You Have the Necessary Backups and Archived Redo Log Files

As mentioned in the pre-requisites section of this article, ensure that you have a current backup that you wish to use to create the duplicate database. Login to query the RMAN catalog:

$ rman target sys/change_on_install@ORA920

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Tag
------- -- -- - ----------- --------------- ------- ------- ---
14      B  A  A DISK        03-NOV-04       1       1       TAG20041103T163334
15      B  F  A DISK        03-NOV-04       1       1       TAG20041103T163336
16      B  A  A DISK        03-NOV-04       1       1       TAG20041103T163651
17      B  F  A DISK        03-NOV-04       1       1

7. Login to Target and Auxiliary Database using RMAN

$ rman target sys/change_on_install@ORA920 auxiliary sys/change_on_install@TESTDB

Recovery Manager: Release 9.2.0.5.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation.  All rights reserved.

connected to target database: ORA920 (DBID=2542332757)
connected to auxiliary database: TESTDB (not mounted)

RMAN>

8. Run the RMAN DUPLICATE DATABASE Command


The following RUN block can be used to fully duplicate the target database from the latest full backup.

 

Note that you can duplicate the database to a specific date/time using the UNTIL TIME '<DATE>' clause. For example, to duplicate the new database to yesterday's date/time, use the following:

duplicate target database to TESTDB until time 'SYSDATE-1';

run {
    # Allocate the channel for the duplicate work
    allocate auxiliary channel ch1 type disk;

    # Duplicate the database to TESTDB
    duplicate target database to TESTDB;
}

using target database controlfile instead of recovery catalog
allocated channel: ch1
channel ch1: sid=13 devtype=DISK

Starting Duplicate Db at 03-NOV-04

printing stored script: Memory Script
{
   set until scn  14757706;
   set newname for datafile  1 to "/u06/app/oradata/TESTDB/system01.dbf";
   set newname for datafile  2 to "/u06/app/oradata/TESTDB/undotbs1_01.dbf";
   set newname for datafile  3 to "/u06/app/oradata/TESTDB/cwmlite01.dbf";
   set newname for datafile  4 to "/u06/app/oradata/TESTDB/drsys01.dbf";
   set newname for datafile  5 to "/u06/app/oradata/TESTDB/indx01.dbf";
   set newname for datafile  6 to "/u06/app/oradata/TESTDB/odm01.dbf";
   set newname for datafile  7 to "/u06/app/oradata/TESTDB/tools01.dbf";
   set newname for datafile  8 to "/u06/app/oradata/TESTDB/users01.dbf";
   set newname for datafile  9 to "/u06/app/oradata/TESTDB/xdb01.dbf";
   set newname for datafile  10 to "/u06/app/oradata/TESTDB/example01.dbf";
   set newname for datafile  11 to "/u06/app/oradata/TESTDB/perfstat01.dbf";
   set newname for datafile  12 to "/u06/app/oradata/TESTDB/users02.dbf";
   restore
   check readonly
   clone database
   ;
}
executing script: Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME


executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 03-NOV-04

channel ch1: starting datafile backupset restore
channel ch1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u06/app/oradata/TESTDB/system01.dbf
restoring datafile 00002 to /u06/app/oradata/TESTDB/undotbs1_01.dbf
restoring datafile 00003 to /u06/app/oradata/TESTDB/cwmlite01.dbf
restoring datafile 00004 to /u06/app/oradata/TESTDB/drsys01.dbf
restoring datafile 00005 to /u06/app/oradata/TESTDB/indx01.dbf
restoring datafile 00006 to /u06/app/oradata/TESTDB/odm01.dbf
restoring datafile 00007 to /u06/app/oradata/TESTDB/tools01.dbf
restoring datafile 00008 to /u06/app/oradata/TESTDB/users01.dbf
restoring datafile 00009 to /u06/app/oradata/TESTDB/xdb01.dbf
restoring datafile 00010 to /u06/app/oradata/TESTDB/example01.dbf
restoring datafile 00011 to /u06/app/oradata/TESTDB/perfstat01.dbf
restoring datafile 00012 to /u06/app/oradata/TESTDB/users02.dbf
channel ch1: restored backup piece 1
piece handle=/orabackup/ORA920/backup_DB_ORA920_S_16_P_16_T_541269216 tag=TAG20041103T163336 params=NULL
channel ch1: restore complete
Finished restore at 03-NOV-04

sql statement: CREATE CONTROLFILE REUSE SET DATABASE "TESTDB" RESETLOGS ARCHIVELOG
  MAXLOGFILES     32
  MAXLOGMEMBERS      5
  MAXDATAFILES      600
  MAXINSTANCES    10
  MAXLOGHISTORY     1134
 LOGFILE
  GROUP  1 ( '/u03/app/oradata/TESTDB/redo_g01a.log',
             '/u04/app/oradata/TESTDB/redo_g01b.log',
             '/u05/app/oradata/TESTDB/redo_g01c.log' ) SIZE 104857600 REUSE,
  GROUP  2 ( '/u03/app/oradata/TESTDB/redo_g02a.log',
             '/u04/app/oradata/TESTDB/redo_g02b.log',
             '/u05/app/oradata/TESTDB/redo_g02c.log' ) SIZE 104857600 REUSE,
  GROUP  3 ( '/u03/app/oradata/TESTDB/redo_g03a.log',
             '/u04/app/oradata/TESTDB/redo_g03b.log',
             '/u05/app/oradata/TESTDB/redo_g03c.log' ) SIZE 104857600 REUSE
 DATAFILE
  '/u06/app/oradata/TESTDB/system01.dbf'
 CHARACTER SET WE8ISO8859P1

printing stored script: Memory Script
{
   switch clone datafile all;
}
executing script: Memory Script

datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=541274660 filename=/u06/app/oradata/TESTDB/undotbs1_01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=541274660 filename=/u06/app/oradata/TESTDB/cwmlite01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=541274660 filename=/u06/app/oradata/TESTDB/drsys01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=4 stamp=541274660 filename=/u06/app/oradata/TESTDB/indx01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=5 stamp=541274660 filename=/u06/app/oradata/TESTDB/odm01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=6 stamp=541274660 filename=/u06/app/oradata/TESTDB/tools01.dbf
datafile 8 switched to datafile copy
input datafilecopy recid=7 stamp=541274660 filename=/u06/app/oradata/TESTDB/users01.dbf
datafile 9 switched to datafile copy
input datafilecopy recid=8 stamp=541274660 filename=/u06/app/oradata/TESTDB/xdb01.dbf
datafile 10 switched to datafile copy
input datafilecopy recid=9 stamp=541274660 filename=/u06/app/oradata/TESTDB/example01.dbf
datafile 11 switched to datafile copy
input datafilecopy recid=10 stamp=541274660 filename=/u06/app/oradata/TESTDB/perfstat01.dbf
datafile 12 switched to datafile copy
input datafilecopy recid=11 stamp=541274660 filename=/u06/app/oradata/TESTDB/users02.dbf

printing stored script: Memory Script
{
   set until scn  14757706;
   recover
   clone database
   delete archivelog
   ;
}


executing script: Memory Script

executing command: SET until clause

Starting recover at 03-NOV-04

starting media recovery

channel ch1: starting archive log restore to default destination
channel ch1: restoring archive log
archive log thread=1 sequence=151
channel ch1: restored backup piece 1
piece handle=/orabackup/ORA920/backup_DB_ORA920_S_17_P_17_T_541269412 tag=TAG20041103T163651 params=NULL
channel ch1: restore complete
archive log filename=/u06/app/oradata/TESTDB/archive/arch_t1_s151.dbf thread=1 sequence=151
channel clone_default: deleting archive log(s)
archive log filename=/u06/app/oradata/TESTDB/archive/arch_t1_s151.dbf recid=1 stamp=541274663
media recovery complete
Finished recover at 03-NOV-04

printing stored script: Memory Script
{
   shutdown clone;
   startup clone nomount ;
}
executing script: Memory Script

database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     252777660 bytes
Fixed Size                      451772 bytes
Variable Size                218103808 bytes
Database Buffers              33554432 bytes
Redo Buffers                    667648 bytes

sql statement: CREATE CONTROLFILE REUSE SET DATABASE "TESTDB" RESETLOGS ARCHIVELOG
  MAXLOGFILES     32
  MAXLOGMEMBERS      5
  MAXDATAFILES      600
  MAXINSTANCES    10
  MAXLOGHISTORY     1134
 LOGFILE
  GROUP  1 ( '/u03/app/oradata/TESTDB/redo_g01a.log',
             '/u04/app/oradata/TESTDB/redo_g01b.log',
             '/u05/app/oradata/TESTDB/redo_g01c.log' ) SIZE 104857600 REUSE,
  GROUP  2 ( '/u03/app/oradata/TESTDB/redo_g02a.log',
             '/u04/app/oradata/TESTDB/redo_g02b.log',
             '/u05/app/oradata/TESTDB/redo_g02c.log' ) SIZE 104857600 REUSE,
  GROUP  3 ( '/u03/app/oradata/TESTDB/redo_g03a.log',
             '/u04/app/oradata/TESTDB/redo_g03b.log',
             '/u05/app/oradata/TESTDB/redo_g03c.log' ) SIZE 104857600 REUSE
 DATAFILE
  '/u06/app/oradata/TESTDB/system01.dbf'
 CHARACTER SET WE8ISO8859P1

printing stored script: Memory Script
{
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/undotbs1_01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/cwmlite01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/drsys01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/indx01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/odm01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/tools01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/users01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/xdb01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/example01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/perfstat01.dbf";
   catalog clone datafilecopy  "/u06/app/oradata/TESTDB/users02.dbf";
   switch clone datafile all;
}
executing script: Memory Script

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/undotbs1_01.dbf recid=1 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/cwmlite01.dbf recid=2 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/drsys01.dbf recid=3 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/indx01.dbf recid=4 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/odm01.dbf recid=5 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/tools01.dbf recid=6 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/users01.dbf recid=7 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/xdb01.dbf recid=8 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/example01.dbf recid=9 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/perfstat01.dbf recid=10 stamp=541274721

cataloged datafile copy
datafile copy filename=/u06/app/oradata/TESTDB/users02.dbf recid=11 stamp=541274721

datafile 2 switched to datafile copy
input datafilecopy recid=1 stamp=541274721 filename=/u06/app/oradata/TESTDB/undotbs1_01.dbf
datafile 3 switched to datafile copy
input datafilecopy recid=2 stamp=541274721 filename=/u06/app/oradata/TESTDB/cwmlite01.dbf
datafile 4 switched to datafile copy
input datafilecopy recid=3 stamp=541274721 filename=/u06/app/oradata/TESTDB/drsys01.dbf
datafile 5 switched to datafile copy
input datafilecopy recid=4 stamp=541274721 filename=/u06/app/oradata/TESTDB/indx01.dbf
datafile 6 switched to datafile copy
input datafilecopy recid=5 stamp=541274721 filename=/u06/app/oradata/TESTDB/odm01.dbf
datafile 7 switched to datafile copy
input datafilecopy recid=6 stamp=541274721 filename=/u06/app/oradata/TESTDB/tools01.dbf
datafile 8 switched to datafile copy
input datafilecopy recid=7 stamp=541274721 filename=/u06/app/oradata/TESTDB/users01.dbf
datafile 9 switched to datafile copy
input datafilecopy recid=8 stamp=541274721 filename=/u06/app/oradata/TESTDB/xdb01.dbf
datafile 10 switched to datafile copy
input datafilecopy recid=9 stamp=541274721 filename=/u06/app/oradata/TESTDB/example01.dbf
datafile 11 switched to datafile copy
input datafilecopy recid=10 stamp=541274721 filename=/u06/app/oradata/TESTDB/perfstat01.dbf
datafile 12 switched to datafile copy
input datafilecopy recid=11 stamp=541274721 filename=/u06/app/oradata/TESTDB/users02.dbf

printing stored script: Memory Script
{
   Alter clone database open resetlogs;
}
executing script: Memory Script

database opened
Finished Duplicate Db at 03-NOV-04

RMAN> exit

9. Create All tempfiles for Temporary Tablespace

In almost all cases, you will need to create the tempfiles for your temporary tablespace:

$ export ORACLE_SID=TESTDB

$ sqlplus "/ as sysdba"

SQL> alter tablespace temp add tempfile '/u06/app/oradata/TESTDB/temp01.dbf'
  2  size 524288000 reuse autoextend on next 524288000 maxsize 1500M;

Tablespace altered.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace:

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level


to start trace:

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Setting the block-checking events shown below (10210, 10211, 10231) will prevent corruption from getting to your disks (at the cost of a database crash).

For tracing of a MAX_CURSORS exceeded error:

event="1000 trace name ERRORSTACK level 3"

To get an error stack related to a SQLNET ORA-03120 error:


event="3120 trace name error stack"

To work around a space leak problem:

event="10262 trace name context forever, level x"

where x is the size of space leak to ignore.

To trace memory shortages:

event="10235 trace name context forever, level 4"

event="600 trace name heapdump, level 4"

To take a shared pool heapdump to track Ora-04031 as the error occurs, set the following event in your init.ora file:

event = "4031 trace name heapdump forever, level 2"

For ORA-04030  errors: Take a dump by setting this event in your INIT file and analyze the trace file. This will clearly pinpoint the problem.

event="4030 trace name errorstack level 3"

The following undocumented SQL statements can be used to obtain information about internal database structures:

* To dump the control file:

alter session set events 'immediate trace name CONTROLF level 10'

* To dump the file headers:

alter session set events 'immediate trace name FILE_HDRS level 10'

* To dump redo log headers:

alter session set events 'immediate trace name REDOHDR level 10'

* To dump the system state:

alter session set events 'immediate trace name SYSTEMSTATE level 10'

* To dump the optimizer statistics whenever a SQL statement is parsed:

alter session set events '10053 trace name context forever'

* To prevent db block corruptions:

event = "10210 trace name context forever, level 10"


event = "10211 trace name context forever, level 10"

event = "10231 trace name context forever, level 10"

* To enable the maximum level of SQL performance monitoring:

event = "10046 trace name context forever, level 12"

* To enable a memory-protect cursor:

event = "10049 trace name context forever, level  2"

* To perform data-block checks:

event = "10210 trace name context forever, level  2"

* To perform index-block checks:

event = "10211 trace name context forever, level  2"

* To perform memory-heap checks:

event = "10235 trace name context forever, level  1"

* To allow 300 bytes memory leak for each connection:

event = "10262 trace name context forever, level 300"

You should be noticing a pattern here for tracing events related to error codes: the first argument in the EVENT is the error code followed by the action you want to take upon receiving the code.

Events at the Session Level

Events are also set at the SESSION level using the ALTER SESSION command or calls to the DBMS_SYSTEM.SET_EV() procedure. The general format for the ALTER SESSION command is:

ALTER SESSION SET EVENTS 'ev_number ev_text level x';

where:

  Ev_number is the event number.

  Ev_text is any required text (usually "trace name context forever").

  x is the required level setting corresponding to the desired action, file, or other required data.
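As a sketch, the equivalent DBMS_SYSTEM.SET_EV() call to set event 10046 at level 12 in another session (the SID 9 and SERIAL# 29 are illustrative):

SQL> exec dbms_system.set_ev(9, 29, 10046, 12, '');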

For example, to provide more detailed SQL trace information:


ALTER SESSION SET EVENTS '10046 trace name context forever, level NN'

where NN:

1 is the same as a regular trace.

4 means also dump bind variables

8 means also dump wait information

12 means dump both bind and wait information
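For example, to capture the maximum detail for the current session and then switch the trace off again (the statement in between is whatever you want to diagnose):

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- run the SQL to be diagnosed
ALTER SESSION SET EVENTS '10046 trace name context off';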

Example Uses of the ALTER SESSION Command to Set EVENT Codes

To coalesce free space in a tablespace pre-version 7.3:

ALTER SESSION SET EVENTS 'immediate trace name coalesce level XX'

where:

             XX is the value of ts# from ts$ table for the tablespace.

To coalesce free space in a tablespace defined as temporary:

ALTER SESSION SET EVENTS 'immediate trace name drop_segments level &x';

where:

   x is the value for file# from ts$ plus 1.

To get the information out of the db block buffers regarding order of LRU chains:

ALTER SESSION SET EVENTS 'immediate trace name buffers level x';

where:

   x  is 1-3 for buffer header order or 4-6 for LRU chain order.

To correct transportable tablespace export hanging (reported on 8.1.6, 8.1.7 on HPUX, a known bug):

ALTER SESSION SET EVENTS '10297 trace name context forever, level 1';

To cause "QKA Disable GBY sort elimination". This affects how Oracle will process sorts:

ALTER SESSION SET EVENTS '10119 trace name context forever';

* You can disable the Index FFS using the event 10156. In this case, CBO will lean toward FTS or Index scan.


* You can set the event 10092 if you want to disable the hash joins completely.

It is very easy to see how SMON cleans up rollback entries by using the event 10015. You can use event 10235 to check how the memory manager works internally.

CBO is definitely not a mystery. Use event 10053 to get the detail of the various plans considered, depending on the statistics available; be careful using this for large multi-table joins, as the report can be quite lengthy! The data density, sparse characteristics, index availability, and index depth all lead the optimizer to make its decisions. You can see the running commentary in trace files generated by the 10053 event.


Virtual Indexes are an undocumented Oracle feature. They are pseudo-indexes that do not behave the same way normal indexes do, and are meant for a very specific purpose.

Creating a Virtual Index:

A virtual index is created in a slightly different manner than normal indexes. A virtual index has no segment attached to it, i.e., the DBA_SEGMENTS view will not show an entry for it. Oracle handles such indexes internally, and a few required dictionary tables are updated so that the optimizer can be made aware of their presence and generate an execution plan that considers them.

Virtual Index Wizard:

As per Oracle, this functionality is not intended for standalone usage. It is part of the Oracle Enterprise Manager Tuning Pack (Virtual Index Wizard). The virtual index wizard functionality allows the user to test a potential new index prior to actually building the new index in the database. It allows the CBO to assess the potential new index for a selected SQL statement by building an explain plan that is aware of the potential new index. This allows the user to determine if the optimizer would use the index, once implemented.

Therefore, the feature is intended to be driven from Enterprise Manager, not used standalone. I went a bit further and actually tested it using SQL*Plus, basically trying to use the same feature but without Enterprise Manager.

Using Virtual Indexes:

I do not see much use of Virtual Indexes in a development area where we can create and drop indexes while testing. However, this feature could prove handy if a query or group of queries has to be tested in production (for want of simulation or urgency!), to determine if a new index will improve the performance, without impacting existing or new sessions.

Virtual Indexes are Permanent:


These are permanent and continue to exist unless we drop them.

Effect of Virtual Indexes:

Their creation will not have an effect on existing and new sessions. Only sessions marked for Virtual Index usage will become aware of their existence.

Hidden Parameter:

Such indexes will be used only when the hidden parameter "_use_nosegment_indexes" is set to true.

Recognition of Virtual Indexes:

The rule-based optimizer did not recognize Virtual Indexes when I tested; the CBO, however, recognizes them. In all of my examples, I have used the CBO. However, I did not carry out rigorous testing in RBO and you may come across exceptions to this view.

Dictionary View:

The dictionary view DBA_SEGMENTS will not show an entry for Virtual Indexes. The DBA_INDEXES and DBA_OBJECTS views will have an entry for them in Oracle 8i; from Oracle 9i onwards, DBA_INDEXES no longer shows Virtual Indexes.

Alteration:

Virtual Indexes cannot be altered and throw a "fake index" error!

ANALYZE Command:

Virtual Indexes can be analyzed, using the ANALYZE command or the DBMS_STATS package, but the statistics cannot be viewed (in Oracle 8i, DBA_INDEXES will not show this either). Oracle may be generating artificial statistics and storing them somewhere for later reference.

Creating Virtual Index:

Creating a Virtual Index can be achieved by using the NOSEGMENT clause with the CREATE INDEX command.

SQL> create unique index am304_u1 on am304(col2) nosegment;

Index created.

Using Virtual Index:

Setting the "_use_nosegment_indexes" parameter enables the optimizer to use virtual indexes. This is a hidden/internal parameter and therefore undocumented. Such parameters should not be altered for Oracle databases unless Oracle Support either advises or recommends that you do so. In our case, we make an exception (!), but only to be set at session level. Do not set it for the complete instance.


Virtual Index: Creation

SQL> create index am301_n1 on am301(col1) nosegment;

Index created.

Virtual Index: Checking Dictionary tables

SQL> select segment_name, segment_type, bytes
  2  from dba_segments
  3  where segment_name = 'AM301_N1';

no rows selected

SQL> select object_name, object_type, status
  2  from dba_objects
  3  where object_name = 'AM301_N1';

OBJECT_NAME        OBJECT_TYPE        STATUS
------------------ ------------------ -------
AM301_N1           INDEX              VALID

SQL> select index_name, index_type, status
  2  from dba_indexes
  3  where index_name = 'AM301_N1';

INDEX_NAME                     INDEX_TYPE   STATUS
------------------------------ ------------ -------
AM301_N1                       NORMAL       VALID

Virtual Index: Use

SQL> create table am301
  2  (col1 number, col2 varchar2(20));

Table created.

SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));

1 row created.

SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));

1 row created.

SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));

1 row created.

SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));

1 row created.

SQL> select * from am301;

      COL1 COL2
---------- --------------------
 512599724 aCR_PdFVdSGJLCOLCjJQ
-2.049E+09 qiVUlonc^p^R_X_W_^Xn
-189009085 prTNPqPUod^miAnLXrMA
2082093299 Bq_icbmcpNFNUKDRdMi]

-- Though inserting alpha-numeric, Oracle also inserted
-- special characters in col2. This one is a bug and is
-- resolved in 9i+.

SQL> insert into am301 select * from am301;

4 rows created.

SQL> insert into am301 select * from am301;

8 rows created.

SQL> insert into am301 select * from am301;

16 rows created.

SQL> alter session set optimizer_mode=first_rows;

Session altered.

SQL> create index am301_n1 on am301(col1) nosegment;

Index created.


SQL> analyze table am301 compute statistics;

Table analyzed.

SQL> analyze index am301_n1 compute statistics;

Index analyzed.

-- It is recommended to use the dbms_stats package to
-- generate statistics. Analyze is used here as an
-- example only.

SQL> set autotrace on

SQL> select * from am301 where col1 = 2082093299;

      COL1 COL2
---------- --------------------
2082093299 Bq_icbmcpNFNUKDRdMi]
2082093299 Bq_icbmcpNFNUKDRdMi]
..

32 rows selected.

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=1 Card=32 Bytes=864)
1    0   TABLE ACCESS (FULL) OF 'AM301' (Cost=1 Card=32 Bytes=864)

SQL> alter session set "_use_nosegment_indexes" = true;  -- set only for the session testing it

Session altered.

SQL> select * from am301 where col1 = 2082093299;

      COL1 COL2
---------- --------------------
2082093299 Bq_icbmcpNFNUKDRdMi]
2082093299 Bq_icbmcpNFNUKDRdMi]

32 rows selected.

Execution Plan

----------------------------------------------------------

0      SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=207 Card=32 Bytes=864)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'AM301' (Cost=207 Card=32 Bytes=864)
2    1     INDEX (RANGE SCAN) OF 'AM301_N1' (NON-UNIQUE) (Cost=7 Card=32)

Virtual Index: Alteration

SQL> alter index am301_n1 rebuild;
alter index am301_n1 rebuild
*
ERROR at line 1:
ORA-08114: can not alter a fake index

SQL> alter index am301_n1 storage(pctincrease 0);
alter index am301_n1 storage(pctincrease 0)
*
ERROR at line 1:
ORA-08114: can not alter a fake index

Virtual Index: Recreation

SQL> create index am301_n1 on am301(col1);
create index am301_n1 on am301(col1)
*
ERROR at line 1:
ORA-00955: name is already used by an existing object


As the Virtual Index has an entry in some of the dictionary tables, it will prevent the creation of an object with the same name. The alternative is to drop and recreate the Virtual Index as a real index.

Virtual Index: Drop and Recreate

SQL> drop index am301_n1;

Index dropped.

SQL> create index am301_n1 on am301(col1);

Index created.

However, a Virtual Index will not prevent the creation of another index with the same column(s).

In the example below, a Virtual Index is created with the name DUMMY; afterwards, a new index with a different name is created on the same columns and structure. Both of the indexes will show in the DBA_OBJECTS listing.

SQL> create index dummy on am310(col1, col2, col3) nosegment;

Index created.

SQL> create index am310_n1 on am310(col1, col2, col3);

Index created.

Conclusion

As I mentioned earlier, this is undocumented, so use it at your own risk. The above feature may not be a must-use option, but is a good-to-know fact. Drop the index once you are done with it, without fail! Its presence can baffle some of the regular scripts that are run to monitor the databases.

 ================================================================================

Trying to drop an old undo tablespace that needs recovery. My notes below; also look/confirm here:

Doc ID:  Note:1013221.6
Subject: RECOVERING FROM A LOST DATAFILE IN A ROLLBACK TABLESPACE

Doc ID:  Note:28812.1
Subject: Rollback Segment Needs Recovery

Chris Marquez
Oracle DBA

====================================
Drop Rollback or UNDO Tablespace With Active / Corrupt / "NEEDS RECOVERY" Segments
====================================

------------------------------------
The Issue:
------------------------------------

---SQL*PLUS
SQL> alter database mount;
Database altered.

SQL> alter database open;
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel

        __OR__

---alert.log
Errors in file /o01/app/oracle/admin/report/bdump/report_smon_1295.trc:
ORA-01578: ORACLE data block corrupted (file # 2, block # 192423)
ORA-01110: data file 2: '/o01/oradata/report/undotbs01.dbf'

*OR*

Tue May 31 13:56:41 2005
Errors in file /o01/app/oracle/admin/report/bdump/report_smon_1646.trc:
ORA-01595: error freeing extent (16) of rollback segment (4))
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4193], [1088], [992], [], [], [], [], []

*OR EVEN*

Sun Jul 17 01:25:56 2005
Errors in file /oracle//bdump/orcl_j001_115070.trc:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [kteuPropTime-2], [], [], [], [], [], [], []

+++++++++++++++++++++++++++++++++++++
A. IF YOU CAN STILL OPEN THE DATABASE
+++++++++++++++++++++++++++++++++++++

------------------------------------
UNDO/RBS Seem OK!?
------------------------------------
col segment_name format a15
select segment_name, status from dba_rollback_segs;

SEGMENT_NAME    STATUS
--------------- ------------------------------------------------
SYSTEM          ONLINE
_SYSSMU1$       ONLINE
_SYSSMU2$       ONLINE
...

------------------------------------
Edit init.ora to Comment UNDO/RBS parameters
------------------------------------
---vi init.ora
        #undo_management=AUTO
        #undo_tablespace=UNDOTBS
        #undo_retention = 18000


------------------------------------
UNDO/RBS Issue Obvious now!
------------------------------------
shutdown
startup
col segment_name format a15
select segment_name, status from dba_rollback_segs;

SEGMENT_NAME    STATUS
--------------- ------------------------------------------------
SYSTEM          ONLINE
_SYSSMU1$       PARTLY AVAILABLE
_SYSSMU2$       OFFLINE
...

+++++++++++++++++++++++++++++++++++++
B. IF YOU CAN *NOT* OPEN THE DATABASE
+++++++++++++++++++++++++++++++++++++

------------------------------------
Edit init.ora to Comment UNDO/RBS parameters & ADD "_smu_debug_mode", event 10015
------------------------------------
---vi init.ora
        #undo_management=AUTO
        #undo_tablespace=UNDOTBS
        #undo_retention = 18000

        # _smu_debug_mode simply collects diagnostic information for support purposes
        _smu_debug_mode=1
        # Event 10015 is the undo segment recovery tracing event.
        # Use this to identify corrupted rollback/undo segments when a database cannot be started.
        event="10015 trace name context forever, level 10"

------------------------------------
startup Again
------------------------------------
SQL> startup nomount pfile=/.../init.ora.UNOD_PARAM;
ORACLE instance started.
SQL> alter database mount;
Database altered.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel

------------------------------------
View the event="10015" trace file for corrupted rollback/undo segments
------------------------------------
udump/> more orcl_ora_815334.trc
....
Recovering rollback segment _SYSSMU2$
UNDO SEG (BEFORE RECOVERY): usn = 2
  Extent Control Header
  -----------------------------------------------------------------

+++++++++++++++++++++++++++++++++++++NOW FIX CORRUPTED SEGMENTS+++++++++++++++++++++++++++++++++++++

------------------------------------Edit init.ora to "force" Rollback or UNDO offline------------------------------------        SQL>select '"'||segment_name||'"'||',' from sys.dba_rollback_segs where tablespace_name = 'UNDOTBS'

---vi init.oraFor example TRADITIONAL ROLLBACK SEGMENTS:        _OFFLINE_ROLLBACK_SEGMENTS=(rbs1,rbs2)        _CORRUPTED_ROLLBACK_SEGMENTS=(rbs1,rbs2)

For example AUM UNDO TABLESPACE (SEGMENTS):        _OFFLINE_ROLLBACK_SEGMENTS=("_SYSSMU8$", "_SYSSMU9$", "_SYSSMU10$", "_SYSSMU30$", "_SYSSMU31$", "_SYSSMU32$", "_SYSSMU34$","_SYSSMU35$")        _CORRUPTED_ROLLBACK_SEGMENTS=("_SYSSMU8$", "_SYSSMU9$", "_SYSSMU10$", "_SYSSMU30$", "_SYSSMU31$", "_SYSSMU32$", "_SYSSMU34$","_SYSSMU35$")

---UNDO/RBS Issue Is Real (bad)!
shutdown
startup
col segment_name format a15
select segment_name, status from dba_rollback_segs;

SEGMENT_NAME    STATUS
--------------- ------------------------------------------------
SYSTEM          ONLINE
_SYSSMU1$       NEEDS RECOVERY
_SYSSMU2$       OFFLINE
...

------------------------------------
Drop Rollback or UNDO Segments:
------------------------------------
SQL> select 'drop rollback segment '||'"'||segment_name||'"'||';'
     from sys.dba_rollback_segs
     where tablespace_name = 'UNDOTBS1';

DROP ROLLBACK SEGMENT rbs1;
DROP ROLLBACK SEGMENT "_SYSSMU1$";
DROP ROLLBACK SEGMENT "_SYSSMU2$";
...

---UNDO/RBS All Gone... Easy and Simple to Drop UNDO/RBS Tablespace.
shutdown
startup


col segment_name format a15
select segment_name, status from dba_rollback_segs;

SEGMENT_NAME    STATUS
--------------- ------------------------------------------------
SYSTEM          ONLINE

1 row selected.

------------------------------------
Drop The Rollback or UNDO Tablespace
------------------------------------
col FILE_NAME for a60
col BYTES for 999,999,999,999,999
select FILE_ID, BYTES, FILE_NAME from dba_data_files
where TABLESPACE_NAME = 'UNDOTBS';

   FILE_ID                BYTES FILE_NAME
---------- -------------------- ------------------------------------------------------------
         2        6,291,456,000 /o01/oradata/report/undotbs01.dbf

SQL> DROP TABLESPACE RBS INCLUDING CONTENTS;

SQL> DROP TABLESPACE UNDOTBS INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.

[oracle@util1 orcl920]$ ls -ltr /o01/oradata/report/undotbs01.dbf
ls: /o01/oradata/orcl920/undotbs01.dbf: No such file or directory

------------------------------------
RE-Create The Rollback or UNDO Tablespace
------------------------------------
SQL> CREATE UNDO TABLESPACE "UNDOTBS" DATAFILE '/o01/oradata/orcl920/undotbs01.dbf'
     SIZE 500M REUSE AUTOEXTEND OFF;
Tablespace created.

[oracle@util1 orcl920]$ ls -ltr /o01/oradata/report/undotbs01.dbf
-rw-r-----    1 oracle   dba      1048584192 May 16 17:50 /o01/oradata/report/undotbs01.dbf

------------------------------------
Edit init.ora to Comment _OFFLINE_ROLLBACK_SEGMENTS= and UNcomment "undo_", "rbs" parameters
------------------------------------
---vi init.ora
        #_OFFLINE_ROLLBACK_SEGMENTS

        undo_management=AUTO
        undo_tablespace=UNDOTBS
        undo_retention = 18000


---UNDO/RBS Issue GONE!
shutdown
startup
col segment_name format a15
select segment_name, status from dba_rollback_segs;

SEGMENT_NAME    STATUS
--------------- ------------------------------------------------
SYSTEM          ONLINE
_SYSSMU11$      ONLINE
_SYSSMU12$      ONLINE
...
11 rows selected.

---alert.log
Mon May 16 17:50:02 2005
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: ALTER DATABASE OPEN

-----------------------------------
DOCS
-----------------------------------
Doc ID:  Note:1013221.6
Subject: RECOVERING FROM A LOST DATAFILE IN A ROLLBACK TABLESPACE

Doc ID:  Note:28812.1
Subject: Rollback Segment Needs Recovery


Repair UNDO log corruption
Don Burleson

In rare cases (usually DBA error) the Oracle UNDO tablespace can become corrupted.

This manifests with this error: ORA-00376: file xx cannot be read at this time

In cases of UNDO log corruption, you must:

   • Change the undo_management parameter from “AUTO” to “MANUAL”
   • Create a new UNDO tablespace
   • Drop the old UNDO tablespace

Dropping the corrupt UNDO tablespace can be tricky and you may get the message:

   ORA-00376: file string cannot be read at this time


To drop a corrupt UNDO tablespace:

1 – Identify the bad segment: 

select
   segment_name,
   status
from
   dba_rollback_segs
where
   tablespace_name = 'UNDOTBS_CORRUPT'
and
   status = 'NEEDS RECOVERY';

SEGMENT_NAME                   STATUS
------------------------------ ----------------
_SYSSMU22$                     NEEDS RECOVERY

2. Bounce the instance with the hidden parameter “_offline_rollback_segments”, specifying the bad segment name:

          _OFFLINE_ROLLBACK_SEGMENTS=_SYSSMU22$

3. Bounce database, nuke the corrupt segment and tablespace:

SQL> drop rollback segment "_SYSSMU22$";

Rollback segment dropped.

SQL> drop tablespace undotbs including contents and datafiles;

Tablespace dropped.
