/***********************************************************************/
/* Document     : Oracle 8i, 9i, 10g queries, information, and tips    */
/* Doc. Version : 58                                                   */
/* File         : oracle9i10g.txt                                      */
/* Date         : 23-05-2008                                           */
/* Content      : Just a series of handy DBA queries.                  */
/* Compiled by  : Albert                                               */
/***********************************************************************/
CONTENTS:

 0. Common data dictionary queries for sessions, locks, performance etc.
 1. Data dictionary queries with regard to files, tablespaces, logs
 2. Notes on performance
 3. Data dictionary queries with regard to performance
 4. IMP and EXP, 10g IMPDP and EXPDP, and SQL*Loader examples
 5. Add, move and size datafiles, logfiles, create objects etc.
 6. Install Oracle 9.2 on Solaris
 7. Install Oracle 9i on Linux
 8. Install Oracle 9.2.0.2 on OpenVMS
 9. Install Oracle 9.2.0.1 on AIX; installation Oracle 8i - 9i
10. Constraints
11. DBMS_JOB and scheduled jobs
12. Net8, 9, 10 / SQL*Net
13. Data dictionary queries on rollback segments
14. Data dictionary queries with regard to security, permissions
15. INIT.ORA parameters
16. Snapshots
17. Triggers
19. Backup and recovery, troubleshooting
20. Tracing
21. Miscellaneous
22. DBA% and V$ views
23. Tuning
24. RMAN
25. Upgrade and migration
26. Some info on Rdb
27. Some info on IFS
28. Some info on 9iAS rel. 2
29 - 35. 9iAS configurations and troubleshooting
30. BLOBs
31. Block corruption
32. iSQL*Plus and EM 10g
33. ADDM
34. ASM and 10g RAC
35. CDC and Streams
36. X$ tables
============================================================================
0. QUICK INFO/VIEWS ON SESSIONS, LOCKS, AND UNDO/ROLLBACK INFORMATION
   IN A SINGLE INSTANCE:
============================================================================
SINGLE INSTANCE QUERIES:
========================

-- ----------------------------
-- 0.1 QUICK VIEW ON SESSIONS:
-- ----------------------------

SELECT substr(username, 1, 10), osuser, sql_address,
       to_char(logon_time, 'DD-MM-YYYY;HH24:MI'), sid, serial#, command,
       substr(program, 1, 30), substr(machine, 1, 30), substr(terminal, 1, 30)
FROM   v$session;

SELECT sql_text, rows_processed
FROM   v$sqlarea
WHERE  address='';     -- fill in an address from v$session.sql_address

-- -------------------------
-- 0.2 QUICK VIEW ON LOCKS: (use sys.obj$ to find ID1)
-- -------------------------

First, let's take a look at some important dictionary views with respect to locks:

SQL> desc v$lock;
 Name                Type
 ------------------- -------------------
 ADDR                RAW(8)
 KADDR               RAW(8)
 SID                 NUMBER
 TYPE                VARCHAR2(2)
 ID1                 NUMBER
 ID2                 NUMBER
 LMODE               NUMBER
 REQUEST             NUMBER
 CTIME               NUMBER
 BLOCK               NUMBER
This view stores all information relating to locks in the database. The
interesting columns in this view are sid (identifying the session holding or
acquiring the lock), type, and the lmode/request pair. Important possible
values of type are TM (DML or Table Lock), TX (Transaction), MR (Media
Recovery), and ST (Disk Space Transaction). Exactly one of lmode and request
is 0 or 1, while the other indicates the lock mode: if lmode is not 0 or 1,
the session has acquired the lock; if request is not 0 or 1, the session is
waiting to acquire the lock. The possible values for lmode and request are:

  1: null (N)
  2: Row Share (SS)
  3: Row Exclusive (SX)
  4: Share (S)
  5: Share Row Exclusive (SSX)
  6: Exclusive (X)

If the lock type is TM, the column id1 is the object's id, and the name of
the object can then be queried like so:

  select name from sys.obj$ where obj# = id1;

A lock type of JI indicates that a materialized view is
being refreshed.

SQL> desc v$locked_object;
 Name                Type
 ------------------- -------------------
 XIDUSN              NUMBER
 XIDSLOT             NUMBER
 XIDSQN              NUMBER
 OBJECT_ID           NUMBER
 SESSION_ID          NUMBER
 ORACLE_USERNAME     VARCHAR2(30)
 OS_USER_NAME        VARCHAR2(30)
 PROCESS             VARCHAR2(12)
 LOCKED_MODE         NUMBER

SQL> desc dba_waiters;
 Name                Type
 ------------------- -------------------
 WAITING_SESSION     NUMBER
 HOLDING_SESSION     NUMBER
 LOCK_TYPE           VARCHAR2(26)
 MODE_HELD           VARCHAR2(40)
 MODE_REQUESTED      VARCHAR2(40)
 LOCK_ID1            NUMBER
 LOCK_ID2            NUMBER

SQL> desc v$transaction;
 Name                Type
 ------------------- -------------------
 ADDR                RAW(8)
 XIDUSN              NUMBER
 XIDSLOT             NUMBER
 XIDSQN              NUMBER
 UBAFIL              NUMBER
 UBABLK              NUMBER
 UBASQN              NUMBER
 UBAREC              NUMBER
 STATUS              VARCHAR2(16)
 START_TIME          VARCHAR2(20)
 START_SCNB          NUMBER
 START_SCNW          NUMBER
 START_UEXT          NUMBER
 START_UBAFIL        NUMBER
 START_UBABLK        NUMBER
 START_UBASQN        NUMBER
 START_UBAREC        NUMBER
 SES_ADDR            RAW(8)
 FLAG                NUMBER
 SPACE               VARCHAR2(3)
 RECURSIVE           VARCHAR2(3)
 NOUNDO              VARCHAR2(3)
 PTX                 VARCHAR2(3)
 NAME                VARCHAR2(256)
 PRV_XIDUSN          NUMBER
 PRV_XIDSLT          NUMBER
 PRV_XIDSQN          NUMBER
 PTX_XIDUSN          NUMBER
 PTX_XIDSLT          NUMBER
 PTX_XIDSQN          NUMBER
 DSCN-B              NUMBER
 DSCN-W              NUMBER
 USED_UBLK           NUMBER
 USED_UREC           NUMBER
 LOG_IO              NUMBER
 PHY_IO              NUMBER
 CR_GET              NUMBER
 CR_CHANGE           NUMBER
 START_DATE          DATE
 DSCN_BASE           NUMBER
 DSCN_WRAP           NUMBER
 START_SCN           NUMBER
 DEPENDENT_SCN       NUMBER
 XID                 RAW(8)
 PRV_XID             RAW(8)
 PTX_XID             RAW(8)
Queries you can use in investigating locks:
===========================================

SELECT XIDUSN, OBJECT_ID, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, PROCESS
FROM   v$locked_object;

SELECT d.OBJECT_ID, substr(OBJECT_NAME,1,20), l.SESSION_ID,
       l.ORACLE_USERNAME, l.LOCKED_MODE
FROM   v$locked_object l, dba_objects d
WHERE  d.OBJECT_ID=l.OBJECT_ID;

SELECT ADDR, KADDR, SID, TYPE, ID1, ID2, LMODE, BLOCK FROM v$lock;

SELECT a.sid, a.saddr, b.ses_addr, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM   v$session a, v$transaction b
WHERE  a.saddr = b.ses_addr;

SELECT s.sid, l.lmode, l.block, substr(s.username, 1, 10),
       substr(s.schemaname, 1, 10), substr(s.osuser, 1, 10),
       substr(s.program, 1, 30), s.command
FROM   v$session s, v$lock l
WHERE  s.sid=l.sid;

SELECT p.spid, s.sid, p.addr, s.paddr, substr(s.username, 1, 10),
       substr(s.schemaname, 1, 10), s.command, substr(s.osuser, 1, 10),
       substr(s.machine, 1, 10)
FROM   v$session s, v$process p
WHERE  s.paddr=p.addr;

SELECT sid, serial#, command, substr(username, 1, 10), osuser, sql_address,
       LOCKWAIT, to_char(logon_time, 'DD-MM-YYYY;HH24:MI'),
       substr(program, 1, 30)
FROM   v$session;

SELECT sid, serial#, username, LOCKWAIT from v$session;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM   v$sess_io v, v$session w
WHERE  v.SID=w.SID
ORDER BY v.SID;

SELECT * from dba_waiters;

SELECT waiting_session, holding_session, lock_type, mode_held
FROM   dba_waiters;

SELECT p.spid unix_spid, s.sid, p.addr, s.paddr,
       substr(s.username, 1, 10) username, substr(s.schemaname, 1, 10) schemaname,
       s.command, substr(s.osuser, 1, 10) osuser, substr(s.machine, 1, 25) machine
FROM   v$session s, v$process p
WHERE  s.paddr=p.addr
ORDER BY p.spid;
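Besides dba_waiters, blocker/waiter pairs can also be found directly with a
v$lock self-join. This is a sketch: a holder with BLOCK=1 and a waiter with
REQUEST>0 share the same ID1/ID2.

SELECT h.sid  holding_sid,
       w.sid  waiting_sid,
       h.lmode mode_held,
       w.request mode_requested,
       h.type, h.id1, h.id2
FROM   v$lock h, v$lock w
WHERE  h.block = 1
AND    w.request > 0
AND    h.id1 = w.id1
AND    h.id2 = w.id2;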
Usage of v$session_longops:
===========================

SQL> desc v$session_longops;

 SID              NUMBER        Session identifier
 SERIAL#          NUMBER        Session serial number
 OPNAME           VARCHAR2(64)  Brief description of the operation
 TARGET           VARCHAR2(64)  The object on which the operation is carried out
 TARGET_DESC      VARCHAR2(32)  Description of the target
 SOFAR            NUMBER        The units of work done so far
 TOTALWORK        NUMBER        The total units of work
 UNITS            VARCHAR2(32)  The units of measurement
 START_TIME       DATE          The starting time of the operation
 LAST_UPDATE_TIME DATE          Time when statistics were last updated
 TIMESTAMP        DATE          Timestamp
 TIME_REMAINING   NUMBER        Estimate (in seconds) of time remaining for the
                                operation to complete
 ELAPSED_SECONDS  NUMBER        The number of elapsed seconds from the start of
                                the operation
 CONTEXT          NUMBER        Context
 MESSAGE          VARCHAR2(512) Statistics summary message
 USERNAME         VARCHAR2(30)  User ID of the user performing the operation
 SQL_ADDRESS      RAW(4 | 8)    Used with SQL_HASH_VALUE to identify the SQL
                                statement associated with the operation
 SQL_HASH_VALUE   NUMBER        Used with SQL_ADDRESS to identify the SQL
                                statement associated with the operation
 SQL_ID           VARCHAR2(13)  SQL identifier of the SQL statement associated
                                with the operation
 QCSID            NUMBER        Session identifier of the parallel coordinator

This view displays the status of various operations that run for longer than
6 seconds (in absolute time). These operations currently include many backup
and recovery functions, statistics gathering, and query execution, and more
operations are added with every Oracle release. To monitor query execution
progress, you must use the cost-based optimizer and you must:

- set the TIMED_STATISTICS or SQL_TRACE parameter to true;
- gather statistics for your objects with the ANALYZE statement or the
  DBMS_STATS package.

You can add information about application-specific long-running operations to
this view by using the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure.
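A minimal sketch of publishing progress from your own batch job through
SET_SESSION_LONGOPS. The operation name, target description and the loop of
100 steps are made up for illustration; rindex/slno must be preserved across
calls.

DECLARE
  l_rindex    BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
  l_slno      BINARY_INTEGER;
  l_totalwork NUMBER := 100;     -- hypothetical total number of steps
BEGIN
  FOR i IN 1 .. l_totalwork LOOP
    -- ... do one unit of work here ...
    dbms_application_info.set_session_longops(
      rindex      => l_rindex,
      slno        => l_slno,
      op_name     => 'MY_BATCH_JOB',     -- shows up in OPNAME
      sofar       => i,
      totalwork   => l_totalwork,
      target_desc => 'batch run',
      units       => 'steps');
  END LOOP;
END;
/

While this runs, the session shows up in the v$session_longops queries below
with SOFAR/TOTALWORK progressing.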
Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar),
       to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.time_remaining), to_char(l.elapsed_seconds),
       l.opname, l.target, l.target_desc, l.message,
       s.username, s.osuser, s.lockwait
from   v$session_longops l, v$session s
where  l.sid = s.sid
and    l.serial# = s.serial#;

Select 'long', to_char(l.sid), to_char(l.serial#), to_char(l.sofar),
       to_char(l.totalwork),
       to_char(l.start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       to_char(l.last_update_time, 'DD-Mon-YYYY HH24:MI:SS'),
       s.username, s.osuser, s.lockwait
from   v$session_longops l, v$session s
where  l.sid = s.sid
and    l.serial# = s.serial#;

select substr(username,1,15), target,
       to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       SOFAR, substr(MESSAGE,1,70)
from   v$session_longops;

select USERNAME, to_char(start_time, 'DD-Mon-YYYY HH24:MI:SS'),
       substr(message,1,90), to_char(time_remaining)
from   v$session_longops;
9i and 10g note:
================

Oracle has a view inside the Oracle data buffers. The view is called v$bh,
and while v$bh was originally developed for Oracle Parallel Server (OPS), it
can be used to show the number of data blocks in the data buffer for every
object in the database. The following query is especially interesting because
you can now see which objects are consuming the data buffer caches. In
Oracle9i, you can use this information to segregate tables to separate RAM
buffers with different blocksizes. Here is a sample query that shows data
buffer utilization for individual objects in the database. Note that this
script uses an Oracle9i scalar sub-query and will not work on pre-9i systems
unless you comment out column c3.

column c0 heading 'Owner'                     format a15
column c1 heading 'Object|Name'               format a30
column c2 heading 'Number|of|Buffers'         format 999,999
column c3 heading 'Percentage|of Data|Buffer' format 999,999,999

select owner       c0,
       object_name c1,
       count(1)    c2,
       (count(1)/(select count(*) from v$bh)) * 100 c3
from   dba_objects o, v$bh bh
where  o.object_id = bh.objd
and    o.owner not in ('SYS','SYSTEM','AURORA$JIS$UTILITY$')
group by owner, object_name
order by count(1) desc;

-- ------------------------------
-- 0.3 QUICK VIEW ON TEMP USAGE:
-- ------------------------------

select total_extents, used_extents, free_extents, current_users, tablespace_name
from   v$sort_segment;

select username, user, sqladdr, extents, tablespace
from   v$sort_usage;

SELECT b.tablespace, ROUND(((b.blocks*p.value)/1024/1024),2),
       a.sid||','||a.serial# SID_SERIAL, a.username, a.program
FROM   sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE  p.name = 'db_block_size'
AND    a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

-- ---------------------------------
-- 0.4 QUICK VIEW ON UNDO/ROLLBACK:
-- ---------------------------------

SELECT substr(username, 1, 10), substr(terminal, 1, 10), substr(osuser, 1, 10),
       t.start_time, r.name, t.used_ublk "ROLLB BLKS", log_io, phy_io
FROM   sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
WHERE  t.xidusn = r.usn
AND    t.ses_addr = s.saddr;
SELECT substr(n.name, 1, 10), s.writes, s.gets, s.waits, s.wraps, s.extents,
       s.status, s.optsize, s.rssize
FROM   V$ROLLNAME n, V$ROLLSTAT s
WHERE  n.usn=s.usn;

SELECT substr(r.name, 1, 10) "RBS", s.sid, s.serial#, s.taddr, t.addr,
       substr(s.username, 1, 10) "USER", t.status, t.cr_get, t.phy_io,
       t.used_ublk, t.noundo, substr(s.program, 1, 15) "COMMAND"
FROM   sys.v_$session s, sys.v_$transaction t, sys.v_$rollname r
WHERE  t.addr = s.taddr
AND    t.xidusn = r.usn
ORDER BY t.cr_get, t.phy_io;

SELECT substr(segment_name, 1, 20), substr(tablespace_name, 1, 20), status,
       INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE
FROM   DBA_ROLLBACK_SEGS;

select 'FREE', count(*) from sys.fet$
union
select 'USED', count(*) from sys.uet$;

-- Quick view active transactions
SELECT NAME, XACTS "ACTIVE TRANSACTIONS"
FROM   V$ROLLNAME, V$ROLLSTAT
WHERE  V$ROLLNAME.USN = V$ROLLSTAT.USN;

SELECT to_char(BEGIN_TIME, 'DD-MM-YYYY;HH24:MI'),
       to_char(END_TIME, 'DD-MM-YYYY;HH24:MI'),
       UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM   V$UNDOSTAT
WHERE  trunc(BEGIN_TIME)=trunc(SYSDATE);

select TO_CHAR(MIN(Begin_Time),'DD-MON-YYYY HH24:MI:SS') "Begin Time",
       TO_CHAR(MAX(End_Time),'DD-MON-YYYY HH24:MI:SS')   "End Time",
       SUM(Undoblks)       "Total Undo Blocks Used",
       SUM(Txncount)       "Total Num Trans Executed",
       MAX(Maxquerylen)    "Longest Query(in secs)",
       MAX(Maxconcurrency) "Highest Concurrent TrCount",
       SUM(Ssolderrcnt),
       SUM(Nospaceerrcnt)
from   V$UNDOSTAT;

SELECT used_urec     -- used_urec = Used Undo records
FROM   v$session s, v$transaction t
WHERE  s.audsid=sys_context('userenv', 'sessionid')
AND    s.taddr = t.addr;

SELECT a.sid, a.username, b.xidusn, b.used_urec, b.used_ublk
FROM   v$session a, v$transaction b
WHERE  a.saddr = b.ses_addr;

SELECT v.SID, v.BLOCK_GETS, v.BLOCK_CHANGES, w.USERNAME, w.OSUSER, w.TERMINAL
FROM   v$sess_io v, V$session w
WHERE  v.SID=w.SID
ORDER BY v.SID;
-- ---------------------------------
-- 0.5 SOME EXPLANATIONS:
-- ---------------------------------

-- Explanation of "COMMAND" in v$session:

 1: CREATE TABLE             2: INSERT                   3: SELECT
 4: CREATE CLUSTER           5: ALTER CLUSTER            6: UPDATE
 7: DELETE                   8: DROP CLUSTER             9: CREATE INDEX
10: DROP INDEX              11: ALTER INDEX             12: DROP TABLE
13: CREATE SEQUENCE         14: ALTER SEQUENCE          15: ALTER TABLE
16: DROP SEQUENCE           17: GRANT                   18: REVOKE
19: CREATE SYNONYM          20: DROP SYNONYM            21: CREATE VIEW
22: DROP VIEW               23: VALIDATE INDEX          24: CREATE PROCEDURE
25: ALTER PROCEDURE         26: LOCK TABLE              27: NO OPERATION
28: RENAME                  29: COMMENT                 30: AUDIT
31: NOAUDIT                 32: CREATE DATABASE LINK    33: DROP DATABASE LINK
34: CREATE DATABASE         35: ALTER DATABASE          36: CREATE ROLLBACK SEGMENT
37: ALTER ROLLBACK SEGMENT  38: DROP ROLLBACK SEGMENT   39: CREATE TABLESPACE
40: ALTER TABLESPACE        41: DROP TABLESPACE         42: ALTER SESSION
43: ALTER USER              44: COMMIT                  45: ROLLBACK
46: SAVEPOINT               47: PL/SQL EXECUTE          48: SET TRANSACTION
49: ALTER SYSTEM SWITCH LOG 50: EXPLAIN                 51: CREATE USER
52: CREATE ROLE             53: DROP USER               54: DROP ROLE
55: SET ROLE                56: CREATE SCHEMA           57: CREATE CONTROL FILE
58: ALTER TRACING           59: CREATE TRIGGER          60: ALTER TRIGGER
61: DROP TRIGGER            62: ANALYZE TABLE           63: ANALYZE INDEX
64: ANALYZE CLUSTER         65: CREATE PROFILE          66: DROP PROFILE
67: ALTER PROFILE           68: DROP PROCEDURE          70: ALTER RESOURCE COST
71: CREATE SNAPSHOT LOG     72: ALTER SNAPSHOT LOG      73: DROP SNAPSHOT LOG
74: CREATE SNAPSHOT         75: ALTER SNAPSHOT          76: DROP SNAPSHOT
79: ALTER ROLE              85: TRUNCATE TABLE          86: TRUNCATE CLUSTER
88: ALTER VIEW              91: CREATE FUNCTION         92: ALTER FUNCTION
93: DROP FUNCTION           94: CREATE PACKAGE          95: ALTER PACKAGE
96: DROP PACKAGE            97: CREATE PACKAGE BODY     98: ALTER PACKAGE BODY
99: DROP PACKAGE BODY

-- Explanation of locks:

Lock modes (v$lock.lmode), with the Mon Lock equivalent in comments:

  0: 'None'
  1: 'Null'          /* N */
  2: 'Row-S (SS)'    /* L */
  3: 'Row-X (SX)'    /* R */
  4: 'Share'         /* S */
  5: 'S/Row-X (SRX)' /* C */
  6: 'Exclusive'     /* X */

Lock types: TX: enqueue, waiting   TM: DML on object   MR: Media Recovery

A TX lock is acquired when a transaction initiates its first change and is
held until the transaction does a COMMIT or ROLLBACK. It is used mainly as a
queuing mechanism so that other sessions can wait for the transaction to
complete. TM (per-table) locks are acquired during the execution of a
transaction when referencing a table with a DML statement, so that the object
is not dropped or altered during the execution of the transaction. This
happens if and only if the dml_locks parameter is non-zero.

LOCKS: locks on user objects, such as tables and rows.
LATCH: locks on system objects, such as shared data structures in memory and
       data dictionary rows.
LOCKS can be shared or exclusive; a LATCH is always exclusive.
UL = user locks, placed by application code using, for example, the DBMS_LOCK
package.

DML LOCKS: data manipulation: table locks, row locks.
DDL LOCKS: preserve the structure of an object (no simultaneous DML and DDL
statements).

DML locks:
  row lock (TX): for rows (insert, update, delete)
  row lock plus table lock: a row lock that also prevents DDL statements
  table lock (TM): taken automatically on insert, update, delete, to prevent
                   DDL on the table

Table lock modes:
  S  : share lock
  RS : row share
  RSX: row share exclusive
  RX : row exclusive
  X  : exclusive (other transactions can only SELECT)
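The command codes above can be decoded inline with DECODE; this sketch only
translates a handful of the common codes and falls back to the raw number for
the rest:

SELECT sid, serial#, username,
       decode(command, 0, 'NONE',
                       2, 'INSERT',
                       3, 'SELECT',
                       6, 'UPDATE',
                       7, 'DELETE',
                      26, 'LOCK TABLE',
                      47, 'PL/SQL EXECUTE',
                      to_char(command)) command_name
FROM   v$session;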
In the V$LOCK lmode column:

  0: None
  1: Null (NULL)
  2: Row-S (SS)
  3: Row-X (SX)
  4: Share (S)
  5: S/Row-X (SSX)
  6: Exclusive (X)
Internal Implementation of Oracle Locks (Enqueues)

The Oracle server uses locks to provide concurrent access to shared
resources, whereas it uses latches to provide exclusive and short-term access
to memory structures inside the SGA. Latches also prevent more than one
process from executing the same piece of code that another process might be
executing. A latch is a simple lock which provides serialized, exclusive-only
access to a memory area in the SGA. Oracle does not use latches to provide
shared access to resources because that would increase CPU usage. Latches are
used for big memory structures and allow the operations required for locking
their substructures. Shared resources can be tables, transactions, redo
threads, etc.

Enqueues can be local or global. In a single instance, enqueues are local to
that instance. There are also global enqueues, such as the ST enqueue, which
must be held before any space transaction can occur on any tablespace in RAC.
ST enqueues are held only for dictionary-managed tablespaces.

These Oracle locks are generally known as enqueues because, whenever a
session requests a lock on a shared resource structure, its lock data
structure is queued on one of the linked lists attached to that resource
structure (the resource structure is discussed later).

Before proceeding further with this topic, here is a brief overview of Oracle
locks. Oracle locks can be applied to compound and simple objects, like
tables and the cache buffer. Locks can be held in different modes: shared,
exclusive, null, sub-shared, sub-exclusive, and shared sub-exclusive.
Depending on the type of object, different modes apply. For example, for a
compound object like a table with rows, all the above-mentioned modes can be
applicable, whereas for simple objects only the first three apply. These lock
modes have no importance of their own; what matters is how they are used by
the subsystem. The lock modes (compatibility between locks) define how a
session will get a lock on an object.
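The TX queuing behaviour described above can be observed with two sessions.
This is a sketch; the table EMP and the empno values are only illustrative:

-- Session 1: start a transaction; this takes a TX lock and a TM lock on EMP.
UPDATE emp SET sal = sal * 1.1 WHERE empno = 7900;   -- no commit yet

-- Session 2: update the same row; this session queues on session 1's TX enqueue.
UPDATE emp SET sal = sal * 1.2 WHERE empno = 7900;   -- hangs...

-- Session 3 (DBA): observe the blocker/waiter pair.
SELECT waiting_session, holding_session, lock_type, mode_held
FROM   dba_waiters;

-- Session 1: COMMIT or ROLLBACK releases the TX lock; session 2 continues.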
-- Explanation of waits:

SQL> desc v$system_event;
 Name
 -----------------------
 EVENT
 TOTAL_WAITS
 TOTAL_TIMEOUTS
 TIME_WAITED
 AVERAGE_WAIT
 TIME_WAITED_MICRO

v$system_event: this view displays the count (total_waits) of all wait events
since startup of the instance. If timed_statistics is set to true, the sum of
the wait times for all events is also displayed in the column time_waited.
The unit of time_waited is one hundredth of a second. Since 10g, an
additional column (time_waited_micro) measures wait times in millionths of a
second. total_waits where event='buffer busy waits' is equal to the sum of
the counts in v$waitstat. v$enqueue_stat can be used to break down waits on
the enqueue wait event. While this view totals all events for the instance,
v$session_event shows the same information per session.

select event, total_waits, time_waited
from   v$system_event
where  event like '%file%'
order by total_waits desc;

column c1 heading 'Event|Name'             format a30
column c2 heading 'Total|Waits'            format 999,999,999
column c3 heading 'Seconds|Waiting'        format 999,999
column c4 heading 'Total|Timeouts'         format 999,999,999
column c5 heading 'Average|Wait|(in secs)' format 99.999
ttitle 'System-wide Wait Analysis|for current wait events'

select event           c1,
       total_waits     c2,
       time_waited /100   c3,
       total_timeouts  c4,
       average_wait /100  c5
from   sys.v_$system_event
where  event not in (
         'dispatcher timer',
         'lock element cleanup',
         'Null event',
         'parallel query dequeue wait',
         'parallel query idle wait - Slaves',
         'pipe get',
         'PL/SQL lock timer',
         'pmon timer',
         'rdbms ipc message',
         'slave wait',
         'smon timer',
         'SQL*Net break/reset to client',
         'SQL*Net message from client',
         'SQL*Net message to client',
         'SQL*Net more data to client',
         'virtual circuit status',
         'WMON goes to sleep'
       )
and    event not like 'DFS%'
and    event not like '%done%'
and    event not like '%Idle%'
and    event not like 'KXFX%'
order by c2 desc;
Snapshot method to determine the true wait events during a workload:

1) Create table beg_system_event as select * from v$system_event;
2) Run the workload through the system or user task.
3) Create table end_system_event as select * from v$system_event;
4) Issue SQL to determine the true wait events:

   SELECT b.event,
          (e.total_waits    - b.total_waits)    total_waits,
          (e.total_timeouts - b.total_timeouts) total_timeouts,
          (e.time_waited    - b.time_waited)    time_waited
   FROM   beg_system_event b, end_system_event e
   WHERE  b.event = e.event;

5) drop table beg_system_event;
   drop table end_system_event;

Cumulative info, after startup:
-------------------------------
SELECT * FROM v$system_event WHERE event = 'enqueue';
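To break the cumulative 'enqueue' figure down by enqueue type (TX, TM, ST,
...), v$enqueue_stat can be queried; a sketch:

SELECT eq_type, total_req#, total_wait#, succ_req#, failed_req#, cum_wait_time
FROM   v$enqueue_stat
ORDER BY cum_wait_time DESC;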
SELECT * FROM v$sysstat WHERE class=4;

select c.name, a.addr, a.gets, a.misses, a.sleeps,
       a.immediate_gets, a.immediate_misses, a.wait_time, b.pid
from   v$latch a, v$latchholder b, v$latchname c
where  a.addr = b.laddr(+)
and    a.latch# = c.latch#
order by a.latch#;

-- ----------------------------------------------------------------
-- 0.6 QUICK INFO ON HIT RATIO, SHARED POOL etc.
-- ----------------------------------------------------------------
Hit ratio:

SELECT (1-(pr.value/(dbg.value+cg.value)))*100
FROM   v$sysstat pr, v$sysstat dbg, v$sysstat cg
WHERE  pr.name  = 'physical reads'
AND    dbg.name = 'db block gets'
AND    cg.name  = 'consistent gets';

SELECT * FROM V$SGA;

-- free memory shared pool:
SELECT * FROM v$sgastat WHERE name = 'free memory';

-- hit ratio shared pool:
SELECT gethits, gets, gethitratio FROM v$librarycache WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM   V$LIBRARYCACHE;

SELECT sum(sharable_mem) FROM v$db_object_cache;

-- finding literals in the shared pool:
SELECT substr(sql_text,1,50) "SQL", count(*), sum(executions) "TotExecs"
FROM   v$sqlarea
WHERE  executions < 5
GROUP BY substr(sql_text,1,50)
HAVING count(*) > 30
ORDER BY 2;

-- ----------------------------------------
-- 0.7 Quick Table and object information
-- ----------------------------------------

SELECT distinct substr(t.owner, 1, 25), substr(t.table_name,1,50),
       substr(t.tablespace_name,1,20), t.chain_cnt, t.logging, s.relative_fno
FROM   dba_tables t, dba_segments s
WHERE  t.owner not in ('SYS','SYSTEM','OUTLN','DBSNMP','WMSYS','ORDSYS',
                       'ORDPLUGINS','MDSYS','CTXSYS','XDB')
AND    t.table_name=s.segment_name
AND    s.segment_type='TABLE'
AND    s.segment_name like 'CI_PAY%';

SELECT substr(segment_name, 1, 30), segment_type, substr(owner, 1, 10),
       extents, initial_extent, next_extent, max_extents
FROM   dba_segments
WHERE  extents > max_extents - 100
AND    owner not in ('SYS','SYSTEM');

SELECT segment_name, owner, tablespace_name, extents
FROM   dba_segments
WHERE  owner='SALES'    -- use the correct schema here
AND    extents > 700;
SELECT owner, substr(object_name, 1, 30), object_type, created,
       last_ddl_time, status
FROM   dba_objects
WHERE  owner='RM_LIVE'
AND    created > SYSDATE-1;

SELECT owner, substr(object_name, 1, 30), object_type, created,
       last_ddl_time, status
FROM   dba_objects
WHERE  status='INVALID';

Compare 2 owners:
-----------------
select table_name from dba_tables where owner='MIS_OWNER'
and table_name not in
(SELECT table_name from dba_tables where OWNER='MARPAT');

Table and column information:
-----------------------------
select substr(table_name, 1, 3) schema,
       table_name,
       column_name,
       substr(data_type, 1, 1) data_type
from   user_tab_columns
where  COLUMN_NAME='ENV_ID'
and    (table_name like 'ALG%' or table_name like 'STG%' or
        table_name like 'ODS%' or table_name like 'DWH%' or
        table_name like 'MKM%')
order by decode(substr(table_name, 1, 3),
                'ALG', 10, 'STG', 20, 'ODS', 30, 'DWH', 40, 'MKM', 50, 60),
         table_name, column_id;

Check on existence of JServer:
------------------------------
select count(*) from all_objects where object_name = 'DBMS_JAVA';

This should return a count of 3.

-- ---------------------------------------
-- 0.8 QUICK INFO ON PRODUCT INFORMATION:
-- ---------------------------------------

SELECT * FROM PRODUCT_COMPONENT_VERSION;
SELECT * FROM NLS_DATABASE_PARAMETERS;
SELECT * FROM NLS_SESSION_PARAMETERS;
SELECT * FROM NLS_INSTANCE_PARAMETERS;
SELECT * FROM V$OPTION;
SELECT * FROM V$LICENSE;
SELECT * FROM V$VERSION;

Oracle RDBMS releases:
----------------------
9.2.0.1 is the terminal release for Oracle 9i Rel. 2. Normally it is patched
to 9.2.0.4 (patch ID 3095277). As from October, patch 9.2.0.5 and, a little
later, 9.2.0.6 became available.

9.0.1.4 is the terminal release for Oracle 9i Rel. 1.
8.1.7   is the terminal release for Oracle8i.  Additional patchsets exist.
8.0.6   is the terminal release for Oracle8.   Additional patchsets exist.
7.3.4   is the terminal release for Oracle7.   Additional patchsets exist.
IS ORACLE 32BIT or 64BIT?
-------------------------
Starting with version 8, Oracle began shipping 64bit versions of its RDBMS
product on UNIX platforms that support 64bit software.

IMPORTANT: 64bit Oracle can only be installed on Operating Systems that are
64bit enabled.

In general, if Oracle is 64bit, '64bit' will be displayed in the opening
banners of Oracle executables such as 'svrmgrl', 'exp' and 'imp'. It will
also be displayed in the headers of Oracle trace files. Otherwise, if '64bit'
is not displayed at these locations, it can be assumed that Oracle is 32bit.

Or from the OS level:

  % cd $ORACLE_HOME/bin
  % file oracle

...if 64bit, '64bit' will be indicated.
To verify the wordsize of a downloaded patchset:
------------------------------------------------
The filename of the downloaded patchset usually dictates which version and
wordsize of Oracle it should be applied against. For instance,
p1882450_8172_SOLARIS64.zip is the 8.1.7.2 patchset for 64bit Oracle on
Solaris. Also refer to the README that is included with the patch or patch
set and this Note:

Win2k Server Certifications:
----------------------------
OS            Product              Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
2000          10g                  N/A             N/A      Certified    Yes           None        None   None
2000          9.2 32-bit -Opteron  N/A             N/A      Certified    Yes           None        None   None
2000          9.2                  N/A             N/A      Certified    Yes           None        None   None
2000          9.0.1                N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.7 (8i)           N/A             N/A      Desupported  Yes           None        N/A    N/A
2000          8.1.6 (8i)           N/A             N/A      Desupported  Yes           None        N/A    N/A
2000, Beta 3  8.1.5 (8i)           N/A             N/A      Withdrawn    Yes           N/A         N/A    N/A
Solaris Server certifications:
------------------------------
OS   Product           Certified With  Version  Status       Addtl. Info.  Components  Other  Install Issue
9    10g 64-bit        N/A             N/A      Certified    Yes           None        None   None
8    10g 64-bit        N/A             N/A      Certified    Yes           None        None   None
10   10g 64-bit        N/A             N/A      Projected    None          N/A         N/A    N/A
9    9.2 64-bit        N/A             N/A      Certified    Yes           None        None   None
8    9.2 64-bit        N/A             N/A      Certified    Yes           None        None   None
10   9.2 64-bit        N/A             N/A      Projected    None          N/A         N/A    N/A
2.6  9.2               N/A             N/A      Certified    Yes           None        None   None
9    9.2               N/A             N/A      Certified    Yes           None        None   None
8    9.2               N/A             N/A      Certified    Yes           None        None   None
7    9.2               N/A             N/A      Certified    Yes           None        None   None
10   9.2               N/A             N/A      Projected    None          N/A         N/A    N/A
9    9.0.1 64-bit      N/A             N/A      Desupported  Yes           None        N/A    N/A
8    9.0.1 64-bit      N/A             N/A      Desupported  Yes           None        N/A    N/A
2.6  9.0.1             N/A             N/A      Desupported  Yes           None        N/A    N/A
9    9.0.1             N/A             N/A      Desupported  Yes           None        N/A    N/A
8    9.0.1             N/A             N/A      Desupported  Yes           None        N/A    N/A
7    9.0.1             N/A             N/A      Desupported  Yes           None        N/A    N/A
9    8.1.7 (8i) 64-bit N/A             N/A      Desupported  Yes           None        N/A    N/A
8    8.1.7 (8i) 64-bit N/A             N/A      Desupported  Yes           None        N/A    N/A
2.6  8.1.7 (8i)        N/A             N/A      Desupported  Yes           None        N/A    N/A
9    8.1.7 (8i)        N/A             N/A      Desupported  Yes           None        N/A    N/A
8    8.1.7 (8i)        N/A             N/A      Desupported  Yes           None        N/A    N/A
7    8.1.7 (8i)        N/A             N/A      Desupported  Yes           None        N/A    N/A
Everything below is desupported.

Oracle clients:
---------------
Server
Version Client Version 10.1.0 10.1.0 Yes Yes 9.2.0 Yes Yes Was
9.0.1 Was Was Was 8.1.7 Yes Yes Was 8.1.6 No No Was 8.1.5 No No No
8.0.6 No Was Was 8.0.5 No No No 7.3.4 No Was Was 9.2.0 Was Yes Was
Yes Was Was Was Was Was 9.0.1 8.1.7 Yes #2 No No Was No Was Was Was
Was Was Was Was Was Was Was Was Was 8.1.6 No Was Was Was Was Was
Was Was Was 8.1.5 No No No Was Was Was Was Was Was 8.0.6 8.0.5
7.3.4 No No No No #1 Was Was Was Was Was Was Was
-- ------------------------------------------------------
-- 0.9 QUICK INFO WITH REGARDS TO LOGS AND BACKUP/RECOVERY:
-- ------------------------------------------------------

SELECT * from V$BACKUP;

SELECT file#, substr(name, 1, 30), status, checkpoint_change#   -- from the controlfile
FROM   V$DATAFILE;

SELECT d.file#, d.status, d.checkpoint_change#, b.status, b.CHANGE#,
       to_char(b.TIME,'DD-MM-YYYY;HH24:MI'), substr(d.name, 1, 40)
FROM   V$DATAFILE d, V$BACKUP b
WHERE  d.file#=b.file#;

SELECT file#, substr(name, 1, 30), status, fuzzy, checkpoint_change#   -- from the file header
FROM   V$DATAFILE_HEADER;

SELECT first_change#, next_change#, sequence#, archived, substr(name, 1, 40),
       COMPLETION_TIME, FIRST_CHANGE#, FIRST_TIME
FROM   V$ARCHIVED_LOG
WHERE  COMPLETION_TIME > SYSDATE -2;

SELECT recid, first_change#, sequence#, next_change# FROM V$LOG_HISTORY;

SELECT resetlogs_change#, checkpoint_change#, controlfile_change#, open_resetlogs
FROM   V$DATABASE;

SELECT * FROM V$RECOVER_FILE;   -- which file needs recovery
-- ------------------------------------------------------------------------------
-- 0.10 QUICK INFO WITH REGARDS TO TABLESPACES, DATAFILES, REDO LOGFILES etc.:
-- ------------------------------------------------------------------------------

Online redo log information: V$LOG, V$LOGFILE:

SELECT l.group#, l.members, l.status, l.bytes, substr(lf.member, 1, 50)
FROM   V$LOG l, V$LOGFILE lf
WHERE  l.group#=lf.group#;

SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, FIRST_TIME,
       to_char(FIRST_TIME, 'DD-MM-YYYY;HH24:MI')
FROM   V$LOG_HISTORY;     -- optionally filter on SEQUENCE#

SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

-- tablespace free-used:
SELECT Total.name "Tablespace Name", Free_space,
       (total_space-Free_space) Used_space, total_space
FROM   (SELECT tablespace_name, sum(bytes/1024/1024) Free_Space
        FROM   sys.dba_free_space
        GROUP BY tablespace_name) Free,
       (SELECT b.name, sum(bytes/1024/1024) TOTAL_SPACE
        FROM   sys.v_$datafile a, sys.v_$tablespace b
        WHERE  a.ts# = b.ts#
        GROUP BY b.name) Total
WHERE  Free.Tablespace_name = Total.name;

SELECT substr(file_name, 1, 70), tablespace_name FROM dba_data_files;

------------------------------------------------
0.11 AUDIT Statements:
------------------------------------------------

select v.sql_text, v.FIRST_LOAD_TIME, v.PARSING_SCHEMA_ID, v.DISK_READS,
       v.ROWS_PROCESSED, v.CPU_TIME, b.username
from   v$sqlarea v, dba_users b
where  v.FIRST_LOAD_TIME > '2008-05-12'
and    v.PARSING_SCHEMA_ID=b.user_id
order by v.FIRST_LOAD_TIME;
------------------------------------------------
0.12 EXAMPLE OF DYNAMIC SQL:
------------------------------------------------

select 'UPDATE '||t.table_name||' SET '||c.column_name||'=REPLACE('||
       c.column_name||','''',CHR(7));'
from   user_tab_columns c, user_tables t
where  c.table_name=t.table_name
and    t.num_rows>0
and    c.DATA_LENGTH>10
and    data_type like '%CHAR%'
ORDER BY t.table_name desc;

create public synonym EMPLOYEE for HARRY.EMPLOYEE;

select 'create public synonym '||table_name||' for CISADM.'||table_name||';'
from   dba_tables where owner='CISADM';

select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||table_name||' TO CISUSER;'
from   dba_tables where owner='CISADM';

select 'GRANT SELECT ON '||table_name||' TO CISREAD;'
from   dba_tables where owner='CISADM';
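Generated statements like the ones above are typically spooled to a script
and then executed. A minimal SQL*Plus sketch (the file name grants.sql is
only illustrative):

-- Spool generated GRANT statements to a script, then run the script.
set pagesize 0 feedback off heading off trimspool on
spool grants.sql
select 'GRANT SELECT ON '||table_name||' TO CISREAD;'
from   dba_tables where owner='CISADM';
spool off
-- review grants.sql, then:
@grants.sql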
------------------------------------------------
0.13 ORACLE MOST COMMON DATATYPES:
------------------------------------------------

Example: NUMBER as integer in comparison to SMALLINT and INTEGER
----------------------------------------------------------------

SQL> create table a (id number(3));
Table created.

SQL> create table b (id smallint);
Table created.

SQL> create table c (id integer);
Table created.

SQL> insert into a values (5);
1 row created.

SQL> insert into a values (999);
1 row created.

SQL> insert into a
  2  values
  3  (1001);
(1001)
*
ERROR at line 3:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into b values (5);
1 row created.

SQL> insert into b values (99);
1 row created.

SQL> insert into b values (999);
1 row created.

SQL> insert into b values (1001);
1 row created.

SQL> insert into b values (65536);
1 row created.

SQL> insert into b values (1048576);
1 row created.

SQL> insert into b values (1099511627776);
1 row created.

SQL> insert into b values (9.5);
1 row created.

SQL> insert into b values (100.23);
1 row created.

SQL> select * from b;

        ID
----------
         5
        99
       999
      1001
     65536
   1048576
1.0995E+12
        10
       100

9 rows selected.

So SMALLINT is really not that "small": it is stored as NUMBER(38,0), and
fractional values such as 9.5 and 100.23 are rounded to integers on insert.

SQL> insert into c values (5);
1 row created.

SQL> insert into c values (9999);
1 row created.

SQL> insert into c values (92.7);
1 row created.

SQL> insert into c values (1099511627776);
1 row created.

SQL> select * from c;

        ID
----------
         5
      9999
        93
1.0995E+12
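The data dictionary confirms how these ANSI types are stored. A quick check, assuming the tables a, b and c from the transcript above still exist (SMALLINT and INTEGER both come back with DATA_TYPE = NUMBER):

```sql
-- How did Oracle actually store the ANSI types from the example?
SELECT table_name, column_name, data_type, data_precision, data_scale
FROM   user_tab_columns
WHERE  table_name IN ('A','B','C');
```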
========================
1. NOTES ON PERFORMANCE:
========================

1.1 POOLS:
==========

-- SHARED POOL:
-- ------------
A literal SQL statement is one which uses literals in the predicate(s)
rather than bind variables, where the value of the literal is likely to
differ between various executions of the statement.

Eg 1: SELECT * FROM emp WHERE ename='CLARK';
      is used by the application instead of
      SELECT * FROM emp WHERE ename=:bind1;
      The bind-variable form is considered a sharable SQL statement.

-- Hard Parse
If a new SQL statement is issued which does not exist in the shared pool,
it has to be parsed fully: Oracle has to allocate memory for the statement
from the shared pool, check the statement syntactically and semantically,
and so on. This is referred to as a hard parse and is very expensive in
terms of both CPU used and the number of latch gets performed.

-- Soft Parse
If a session issues a SQL statement which is already in the shared pool
AND it can use an existing version of that statement, this is known as a
'soft parse'. As far as the application is concerned, it has asked to
parse the statement.

If two statements are textually identical but cannot be shared, they are
called 'versions' of the same statement. If Oracle matches a statement
with many versions, it has to check each version in turn to see if it is
truly identical to the statement currently being parsed. Hence high
version counts are best avoided. The best approach is that all SQL should
be sharable, unless it is ad hoc or infrequently used SQL where it is
important to give the CBO as much information as possible in order for it
to produce a good execution plan.

-- Eliminating Literal SQL
If you have an existing application, it is unlikely that you can eliminate
all literal SQL, but you should be prepared to eliminate some if it is
causing problems. By looking at the V$SQLAREA view it is possible to see
which literal statements are good candidates for conversion to bind
variables. The following query shows SQL in the SGA where there are a
large number of similar statements:

SELECT substr(sql_text,1,40) "SQL", count(*), sum(executions) "TotExecs"
FROM   v$sqlarea
WHERE  executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

The values 40, 5 and 30 are example values, so this query looks for
different statements whose first 40 characters are the same, which have
each been executed only a few times, and of which there are at least 30
different occurrences in the shared pool. The query uses the idea that it
is common for literal statements to begin "SELECT col1,col2,col3 FROM
table WHERE ...", with the leading portion of each statement being the
same.

-- Avoid Invalidations
Certain operations change the state of cursors to INVALID, because they
directly modify the context of the objects associated with those cursors.
Such operations are TRUNCATE, ANALYZE or DBMS_STATS.GATHER_XXX on tables
or indexes, and grant changes on underlying objects. The associated
cursors stay in the SQLAREA, but the next time they are referenced they
must be reloaded and fully reparsed, so overall performance is impacted.
The following query helps to identify the affected cursors:

SELECT substr(sql_text, 1, 40) "SQL", invalidations
FROM   v$sqlarea
ORDER BY invalidations DESC;

-- CURSOR_SHARING parameter (8.1.6 onwards)
CURSOR_SHARING is a parameter introduced in Oracle 8.1.6. It should be
used with caution in this release. If this parameter is set to FORCE,
literals will be replaced by system-generated bind variables where
possible. For multiple similar statements which differ only in the
literals used, this allows the cursors to be shared even though the
application-supplied SQL uses literals. The parameter can be set
dynamically at the system or session level:

ALTER SESSION SET cursor_sharing = FORCE;
or
ALTER SYSTEM SET cursor_sharing = FORCE;

or it can be set in the init.ora file.

Note: as the FORCE setting causes system-generated bind variables to be
used in place of literals, a different execution plan may be chosen by the
cost based optimizer (CBO), as it no longer has the literal values
available to it when costing the best execution plan.

In Oracle9i, it is possible to set CURSOR_SHARING=SIMILAR. SIMILAR causes
statements that may differ in some literals, but are otherwise identical,
to share a cursor, unless the literals affect either the meaning of the
statement or the degree to which the plan is optimized. This enhancement
improves the usability of the parameter for situations where FORCE would
normally cause a different, undesired execution plan. With
CURSOR_SHARING=SIMILAR, Oracle determines which literals are "safe" for
substitution with bind variables. This will result in some SQL not being
shared, in an attempt to provide a more efficient execution plan.

-- SESSION_CACHED_CURSORS parameter
SESSION_CACHED_CURSORS is a numeric parameter which can be set at instance
level, or at session level using the command:

ALTER SESSION SET session_cached_cursors = NNN;

The value NNN determines how many 'cached' cursors there can be in your
session. Whenever a statement is parsed, Oracle first looks at the
statements pointed to by your private session cache; if a sharable version
of the statement exists there, it can be used. This provides shortcut
access to frequently parsed statements, using less CPU and far fewer latch
gets than a soft or hard parse.
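Whether the session cursor cache and the shared pool are paying off can also be read directly from v$sysstat; a sketch (statistic names as used in 9i/10g):

```sql
-- Compare cache hits against total and hard parses:
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('session cursor cache hits',
                'session cursor cache count',
                'parse count (total)',
                'parse count (hard)');
```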
To get placed in the session cache, the same statement has to be parsed 3
times within the same cursor; a pointer to the shared cursor is then added
to your session cache. If all session cache cursors are in use, the least
recently used entry is discarded. If you do not have this parameter set
already, it is advisable to set it to a starting value of about 50. The
statistics section of the bstat/estat report includes a value for 'session
cursor cache hits', which shows whether the cursor cache is giving any
benefit. The size of the cursor cache can then be increased or decreased
as necessary. SESSION_CACHED_CURSORS is particularly useful with Oracle
Forms applications when forms are frequently opened and closed.

-- SHARED_POOL_RESERVED_SIZE parameter
There are already quite a few notes about this parameter in circulation.
The parameter was introduced in Oracle 7.1.5 and provides a means of
reserving a portion of the shared pool for large memory allocations. The
reserved area comes out of the shared pool itself. From a practical point
of view, one should set SHARED_POOL_RESERVED_SIZE to about 10% of
SHARED_POOL_SIZE, unless either the shared pool is very large OR
SHARED_POOL_RESERVED_MIN_ALLOC has been set lower than the default value:
if the shared pool is very large, then 10% may waste a significant amount
of memory when a few Mb will suffice; if SHARED_POOL_RESERVED_MIN_ALLOC
has been lowered, then many space requests may be eligible to be satisfied
from this portion of the shared pool, and so 10% may be too little. It is
easy to monitor the space usage of the reserved area using the
V$SHARED_POOL_RESERVED view, which has a column FREE_SPACE.

-- SHARED_POOL_RESERVED_MIN_ALLOC parameter
In Oracle8i this parameter is hidden. SHARED_POOL_RESERVED_MIN_ALLOC
should generally be left at its default value, although in certain cases
values of 4100 or 4200 may help relieve some contention on a heavily
loaded shared pool.

-- SHARED_POOL_SIZE parameter
SHARED_POOL_SIZE controls the size of the shared pool itself. The size of
the shared pool can impact performance. If it is too small, sharable
information is likely to be flushed from the pool and then later need to
be reloaded (rebuilt). If there is heavy use of literal SQL and the shared
pool is too large, then over time a lot of small chunks of memory can
build up on the internal memory freelists, causing the shared pool latch
to be held for longer, which in turn can impact performance. In this
situation a smaller shared pool may perform better than a larger one. This
problem is greatly reduced in 8.0.6 and in 8.1.6 onwards. NB: the shared
pool itself should never be made so large that paging or swapping occurs,
as performance can then decrease by many orders of magnitude.
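Space usage of the reserved area discussed above can be watched via V$SHARED_POOL_RESERVED; a sketch (a non-zero REQUEST_FAILURES suggests the reserved area is too small):

```sql
-- Monitor the shared pool reserved area:
SELECT free_space, used_space, request_misses, request_failures
FROM   v$shared_pool_reserved;
```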
-- _SQLEXEC_PROGRESSION_COST parameter (8.1.5 onwards)
This is a hidden parameter which was introduced in Oracle 8.1.5. The
parameter is included here as the default setting has caused some problems
with SQL sharability. Setting this parameter to 0 can avoid these issues,
which otherwise result in multiple versions of statements in the shared
pool. Eg: add the following to the init.ora file:

# _SQLEXEC_PROGRESSION_COST is set to ZERO to avoid SQL sharing issues
# See Note:62143.1 for details
_sqlexec_progression_cost=0

Note that a side effect of setting this to '0' is that the
V$SESSION_LONGOPS view is not populated by long running queries.

-- MTS, Shared Server and XA
The multi-threaded server (MTS) adds to the load on the shared pool and
can contribute to any problems, as the User Global Area (UGA) resides in
the shared pool. This is also true of XA sessions in Oracle7, as their UGA
is located in the shared pool. (In Oracle8/8i XA sessions do NOT put their
UGA in the shared pool.) In Oracle8 the Large Pool can be used for MTS,
reducing its impact on shared pool activity; however, memory allocations
in the Large Pool still make use of the "shared pool latch". Using
dedicated connections rather than MTS causes the UGA to be allocated out
of process private memory rather than the shared pool. Private memory
allocations do not use the "shared pool latch", so a switch from MTS to
dedicated connections can help reduce contention in some cases. In
Oracle9i, MTS was renamed to "Shared Server"; for the purposes of the
shared pool, the behaviour is essentially the same.
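To see whether sessions actually come in over shared servers or dedicated servers, the SERVER column of v$session can be grouped; a quick sketch:

```sql
-- DEDICATED, SHARED, or NONE (an idle shared server connection):
SELECT server, count(*)
FROM   v$session
GROUP BY server;
```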
Useful SQL for looking at memory and Shared Pool problems
---------------------------------------------------------

SGA layout:
-----------
SELECT * FROM V$SGA;

Free memory in the shared pool:
-------------------------------
SELECT * FROM v$sgastat WHERE name = 'free memory';

Hit ratio of the shared pool:
-----------------------------
SELECT gethits, gets, gethitratio FROM v$librarycache
WHERE namespace = 'SQL AREA';

SELECT SUM(PINS) "EXECUTIONS", SUM(RELOADS) "CACHE MISSES WHILE EXECUTING"
FROM V$LIBRARYCACHE;

SELECT sum(sharable_mem) FROM v$db_object_cache;

Statistics:
-----------
SELECT class, value, name FROM v$sysstat;

Executions:
-----------
SELECT substr(sql_text,1,90) "SQL", count(*), sum(executions) "TotExecs"
FROM   v$sqlarea
WHERE  executions > 5
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 10
ORDER BY 2;

The values 90, 5 and 10 are example values; the reasoning is the same as
for the earlier v$sqlarea query on similar literal statements.

V$SQLAREA:
SQL_TEXT            VARCHAR2(1000)  First thousand characters of the SQL text for the current cursor
SHARABLE_MEM        NUMBER          Amount of shared memory used by a cursor. If multiple child cursors exist, the sum of all shared memory used by all child cursors
PERSISTENT_MEM      NUMBER          Fixed amount of memory used for the lifetime of an open cursor. If multiple child cursors exist, the fixed sum of memory used for the lifetime of all the child cursors
RUNTIME_MEM         NUMBER          Fixed amount of memory required during execution of a cursor. If multiple child cursors exist, the fixed sum of all memory required during execution of all the child cursors
SORTS               NUMBER          Sum of the number of sorts that were done for all the child cursors
VERSION_COUNT       NUMBER          Number of child cursors that are present in the cache under this parent
LOADED_VERSIONS     NUMBER          Number of child cursors that are present in the cache and have their context heap (KGL heap 6) loaded
OPEN_VERSIONS       NUMBER          Number of child cursors that are currently open under this current parent
USERS_OPENING       NUMBER          Number of users that have any of the child cursors open
FETCHES             NUMBER          Number of fetches associated with the SQL statement
EXECUTIONS          NUMBER          Total number of executions, totalled over all the child cursors
USERS_EXECUTING     NUMBER          Total number of users executing the statement over all child cursors
LOADS               NUMBER          Number of times the object was loaded or reloaded
FIRST_LOAD_TIME     VARCHAR2(19)    Timestamp of the parent creation time
INVALIDATIONS       NUMBER          Total number of invalidations over all the child cursors
PARSE_CALLS         NUMBER          Sum of all parse calls to all the child cursors under this parent
DISK_READS          NUMBER          Sum of the number of disk reads over all child cursors
BUFFER_GETS         NUMBER          Sum of buffer gets over all child cursors
ROWS_PROCESSED      NUMBER          Total number of rows processed on behalf of this SQL statement
COMMAND_TYPE        NUMBER          The Oracle command type definition
OPTIMIZER_MODE      VARCHAR2(10)    Mode under which the SQL statement is executed
PARSING_USER_ID     NUMBER          User ID of the user that has parsed the very first cursor under this parent
PARSING_SCHEMA_ID   NUMBER          Schema ID that was used to parse this child cursor
KEPT_VERSIONS       NUMBER          Number of child cursors that have been marked to be kept using the DBMS_SHARED_POOL package
ADDRESS             RAW(4)          Address of the handle to the parent for this cursor
HASH_VALUE          NUMBER          Hash value of the parent statement in the library cache
MODULE              VARCHAR2(64)    Name of the module that was executing at the time that the SQL statement was first parsed, as set by calling DBMS_APPLICATION_INFO.SET_MODULE
MODULE_HASH         NUMBER          Hash value of the module that is named in the MODULE column
ACTION              VARCHAR2(64)    Name of the action that was executing at the time that the SQL statement was first parsed, as set by calling DBMS_APPLICATION_INFO.SET_ACTION
ACTION_HASH         NUMBER          Hash value of the action that is named in the ACTION column
SERIALIZABLE_ABORTS NUMBER          Number of times the transaction fails to serialize, producing ORA-08177 errors, totalled over all the child cursors
IS_OBSOLETE         VARCHAR2(1)     Indicates whether the cursor has become obsolete (Y) or not (N). This can happen if the number of child cursors is too large
CHILD_LATCH         NUMBER          Child latch number that is protecting the cursor

V$SQL:
------
V$SQL lists statistics on the shared SQL area without the GROUP BY clause
and contains one row for each child of the original SQL text entered.

Column              Datatype        Description
SQL_TEXT            VARCHAR2(1000)  First thousand characters of the SQL text for the current cursor
SHARABLE_MEM        NUMBER          Amount of shared memory used by this child cursor (in bytes)
PERSISTENT_MEM      NUMBER          Fixed amount of memory used for the lifetime of this child cursor (in bytes)
RUNTIME_MEM         NUMBER          Fixed amount of memory required during the execution of this child cursor
SORTS               NUMBER          Number of sorts that were done for this child cursor
LOADED_VERSIONS     NUMBER          Indicates whether the context heap is loaded (1) or not (0)
OPEN_VERSIONS       NUMBER          Indicates whether the child cursor is locked (1) or not (0)
USERS_OPENING       NUMBER          Number of users executing the statement
FETCHES             NUMBER          Number of fetches associated with the SQL statement
EXECUTIONS          NUMBER          Number of executions that took place on this object since it was brought into the library cache
USERS_EXECUTING     NUMBER          Number of users executing the statement
LOADS               NUMBER          Number of times the object was either loaded or reloaded
FIRST_LOAD_TIME     VARCHAR2(19)    Timestamp of the parent creation time
INVALIDATIONS       NUMBER          Number of times this child cursor has been invalidated
PARSE_CALLS         NUMBER          Number of parse calls for this child cursor
DISK_READS          NUMBER          Number of disk reads for this child cursor
BUFFER_GETS         NUMBER          Number of buffer gets for this child cursor
ROWS_PROCESSED      NUMBER          Total number of rows the parsed SQL statement returns
COMMAND_TYPE        NUMBER          Oracle command type definition
OPTIMIZER_MODE      VARCHAR2(10)    Mode under which the SQL statement is executed
OPTIMIZER_COST      NUMBER          Cost of this query given by the optimizer
PARSING_USER_ID     NUMBER          User ID of the user who originally built this child cursor
PARSING_SCHEMA_ID   NUMBER          Schema ID that was used to originally build this child cursor
KEPT_VERSIONS       NUMBER          Indicates whether this child cursor has been marked to be kept pinned in the cache using the DBMS_SHARED_POOL package
ADDRESS             RAW(4)          Address of the handle to the parent for this cursor
TYPE_CHK_HEAP       RAW(4)          Descriptor of the type check heap for this child cursor
HASH_VALUE          NUMBER          Hash value of the parent statement in the library cache
PLAN_HASH_VALUE     NUMBER          Numerical representation of the SQL plan for this cursor. Comparing one PLAN_HASH_VALUE to another easily identifies whether or not two plans are the same (rather than comparing the two plans line by line)
CHILD_NUMBER        NUMBER          Number of this child cursor
MODULE              VARCHAR2(64)    Name of the module that was executing at the time that the SQL statement was first parsed, which is set by calling DBMS_APPLICATION_INFO.SET_MODULE
MODULE_HASH         NUMBER          Hash value of the module listed in the MODULE column
ACTION              VARCHAR2(64)    Name of the action that was executing at the time that the SQL statement was first parsed, which is set by calling DBMS_APPLICATION_INFO.SET_ACTION
ACTION_HASH         NUMBER          Hash value of the action listed in the ACTION column
SERIALIZABLE_ABORTS NUMBER          Number of times the transaction fails to serialize, producing ORA-08177 errors, per cursor
OUTLINE_CATEGORY    VARCHAR2(64)    If an outline was applied during construction of the cursor, this column displays the category of that outline; otherwise the column is left blank
CPU_TIME            NUMBER          CPU time (in microseconds) used by this cursor for parsing/executing/fetching
ELAPSED_TIME        NUMBER          Elapsed time (in microseconds) used by this cursor for parsing/executing/fetching
OUTLINE_SID         NUMBER          Outline session identifier
CHILD_ADDRESS       RAW(4)          Address of the child cursor
SQLTYPE             NUMBER          Denotes the version of the SQL language used for this statement
REMOTE              VARCHAR2(1)     (Y/N) Identifies whether the cursor is remote mapped or not
OBJECT_STATUS       VARCHAR2(19)    Status of the cursor (VALID/INVALID)
LITERAL_HASH_VALUE  NUMBER          Hash value of the literals which are replaced with system-generated bind variables and are to be matched, when CURSOR_SHARING is used. This is not the hash value for the SQL statement. If CURSOR_SHARING is not used, the value is 0
LAST_LOAD_TIME      VARCHAR2(19)
IS_OBSOLETE         VARCHAR2(1)     Indicates whether the cursor has become obsolete (Y) or not (N). This can happen if the number of child cursors is too large
CHILD_LATCH         NUMBER          Child latch number that is protecting the cursor
Checking for high version counts:
---------------------------------
SELECT address, hash_value, version_count, users_opening, users_executing,
       substr(sql_text,1,40) "SQL"
FROM   v$sqlarea
WHERE  version_count > 10;

"Versions" of a statement occur where the SQL is character-for-character
identical but the underlying objects or binds etc. are different.

Finding statements which use lots of shared pool memory:
--------------------------------------------------------
SELECT substr(sql_text,1,60) "Stmt", count(*), sum(sharable_mem) "Mem",
       sum(users_opening) "Open", sum(executions) "Exec"
FROM   v$sql
GROUP BY substr(sql_text,1,60)
HAVING sum(sharable_mem) > 20000;

SELECT substr(sql_text,1,100) "Stmt", count(*), sum(sharable_mem) "Mem",
       sum(users_opening) "Open", sum(executions) "Exec"
FROM   v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

SELECT substr(sql_text,1,100) "Stmt", count(*), sum(executions) "Exec"
FROM   v$sql
GROUP BY substr(sql_text,1,100)
HAVING sum(executions) > 200;

The 20000-byte threshold in the first query plays the role of MEMSIZE,
which should be about 10% of the shared pool size in bytes. This should
show whether there are similar literal statements, or multiple versions of
a statement, which account for a large portion of the memory in the shared
pool.
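Large objects that keep getting flushed and reloaded can be pinned with the DBMS_SHARED_POOL package (the same package referenced by the KEPT_VERSIONS columns above). A sketch; the package may first have to be installed via dbmspool.sql, and the package name is only an example:

```sql
-- Pin a frequently used package in the shared pool (example object name):
exec DBMS_SHARED_POOL.KEEP('SCOTT.SOME_PACKAGE');

-- List the objects currently kept:
SELECT owner, name, type FROM v$db_object_cache WHERE kept = 'YES';
```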
1.2 Statistics:
---------------
Rule based / Cost based.

- Apply EXPLAIN PLAN to the query.
- ANALYZE command:

ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE COMPUTE STATISTICS FOR ALL INDEXES;
ANALYZE INDEX scott.indx1 COMPUTE STATISTICS;
ANALYZE TABLE EMPLOYEE ESTIMATE STATISTICS SAMPLE 10 PERCENT;
ANALYZE TABLE EMPLOYEE DELETE STATISTICS;

- DBMS_UTILITY.ANALYZE_SCHEMA() procedure:

DBMS_UTILITY.ANALYZE_SCHEMA (
  schema           VARCHAR2,
  method           VARCHAR2,
  estimate_rows    NUMBER   DEFAULT NULL,
  estimate_percent NUMBER   DEFAULT NULL,
  method_opt       VARCHAR2 DEFAULT NULL);

DBMS_UTILITY.ANALYZE_DATABASE (
  method           VARCHAR2,
  estimate_rows    NUMBER   DEFAULT NULL,
  estimate_percent NUMBER   DEFAULT NULL,
  method_opt       VARCHAR2 DEFAULT NULL);

method = COMPUTE, ESTIMATE or DELETE.

To execute:
exec DBMS_UTILITY.ANALYZE_SCHEMA('CISADM','COMPUTE');
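On 9i/10g, DBMS_STATS is the preferred interface for gathering optimizer statistics instead of ANALYZE or DBMS_UTILITY; a minimal sketch for the same schema:

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'CISADM',
    estimate_percent => 10,      -- sample 10 percent of the rows
    cascade          => TRUE);   -- also gather index statistics
END;
/
```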
1.3 Storage parameters:
-----------------------
Segment: pctfree, pctused, and the number and size of extents in the
STORAGE clause.

- very low updates : pctfree low
- if updates, oltp : pctfree 10, pctused 40
- if only inserts  : pctfree low
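The guidelines above translate into a CREATE TABLE along these lines; a sketch in which the table, tablespace and extent sizes are only example values:

```sql
-- OLTP table that receives updates: leave room in each block for row growth
CREATE TABLE employee_hist (
  empno NUMBER(10),
  name  VARCHAR2(30)
)
PCTFREE 10 PCTUSED 40
TABLESPACE data
STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0);
```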
1.4 Rebuild indexes on a regular basis:
---------------------------------------
alter index SCOTT.EMPNO_INDEX rebuild
tablespace INDEX
storage (initial 5M next 5M pctincrease 0);

You should next use the ANALYZE TABLE ... COMPUTE STATISTICS command.

1.5 Is an index used in a query?:
---------------------------------
The WHERE clause of a query must use the 'leading column' of (one of) the
index(es). Suppose an index 'indx1' exists on EMPLOYEE(city, state, zip),
and a user issues the query:

SELECT .. FROM EMPLOYEE WHERE state='NY'

Then this query will not use that index! Therefore you must pay attention
to the leading column of any index.

1.6 Set transaction parameters:
-------------------------------
ONLY ORACLE 7, 8, 8i:
Suppose you must perform an action which will generate a lot of redo and
rollback. If you want to influence which rollback segment will be used in
your transactions, you can use the statement:

set transaction use rollback segment SEGMENT_NAME;

1.7 Reduce fragmentation of a dictionary managed tablespace:
------------------------------------------------------------
alter tablespace DATA coalesce;
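Whether a tablespace is fragmented at all can be checked first; a sketch counting free extents per tablespace (many small chunks with a small largest chunk suggests fragmentation):

```sql
SELECT tablespace_name,
       count(*)             free_chunks,
       max(bytes)/1024/1024 largest_chunk_mb
FROM   dba_free_space
GROUP BY tablespace_name;
```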
1.8 Normalisation of tables:
----------------------------
The more the tables are 'normalized', the higher the performance cost of
queries that join those tables.

1.9 Commits after every N rows:
-------------------------------
declare
  i number := 0;
  cursor s1 is SELECT * FROM tab1 WHERE col1 = 'value1' FOR UPDATE;
begin
  for c1 in s1 loop
    update tab1 set col1 = 'value2' WHERE current of s1;
    i := i + 1;
    if i > 1000 then
      commit;
      i := 0;
    end if;
  end loop;
  commit;
end;
/

-- -----------------------------
CREATE TABLE TEST (
  ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1000;
begin
  -- Commit after every X records
  while i > 1 loop
    insert into TEST values (1, sysdate+i, 'joop');
    i := i - 1;
    commit;
  end loop;
  commit;
end;
/

-- -----------------------------
CREATE TABLE TEST2 (
  I     NUMBER       NULL,
  ID    NUMBER(10)   NULL,
  DATUM DATE         NULL,
  DAG   VARCHAR2(10) NULL,
  NAME  VARCHAR2(10) NULL
);

declare
  i number := 1;
  j date;
  k varchar2(10);
begin
  while i
SQL> desc v$sess_io
Name                       Type
-------------------------- -------------
SID                        NUMBER
BLOCK_GETS                 NUMBER
CONSISTENT_GETS            NUMBER
PHYSICAL_READS             NUMBER
BLOCK_CHANGES              NUMBER
CONSISTENT_CHANGES         NUMBER

SQL> desc v$session
Name                       Type
-------------------------- -------------
SADDR                      RAW(8)
SID                        NUMBER
SERIAL#                    NUMBER
AUDSID                     NUMBER
PADDR                      RAW(8)
USER#                      NUMBER
USERNAME                   VARCHAR2(30)
COMMAND                    NUMBER
OWNERID                    NUMBER
TADDR                      VARCHAR2(16)
LOCKWAIT                   VARCHAR2(16)
STATUS                     VARCHAR2(8)
SERVER                     VARCHAR2(9)
SCHEMA#                    NUMBER
SCHEMANAME                 VARCHAR2(30)
OSUSER                     VARCHAR2(30)
PROCESS                    VARCHAR2(12)
MACHINE                    VARCHAR2(64)
TERMINAL                   VARCHAR2(30)
PROGRAM                    VARCHAR2(48)
TYPE                       VARCHAR2(10)
SQL_ADDRESS                RAW(8)
SQL_HASH_VALUE             NUMBER
SQL_ID                     VARCHAR2(13)
SQL_CHILD_NUMBER           NUMBER
PREV_SQL_ADDR              RAW(8)
PREV_HASH_VALUE            NUMBER
PREV_SQL_ID                VARCHAR2(13)
PREV_CHILD_NUMBER          NUMBER
PLSQL_ENTRY_OBJECT_ID      NUMBER
PLSQL_ENTRY_SUBPROGRAM_ID  NUMBER
PLSQL_OBJECT_ID            NUMBER
PLSQL_SUBPROGRAM_ID        NUMBER
MODULE                     VARCHAR2(48)
MODULE_HASH                NUMBER
ACTION                     VARCHAR2(32)
ACTION_HASH                NUMBER
CLIENT_INFO                VARCHAR2(64)
FIXED_TABLE_SEQUENCE       NUMBER
ROW_WAIT_OBJ#              NUMBER
ROW_WAIT_FILE#             NUMBER
ROW_WAIT_BLOCK#            NUMBER
ROW_WAIT_ROW#              NUMBER
LOGON_TIME                 DATE
LAST_CALL_ET               NUMBER
PDML_ENABLED               VARCHAR2(3)
FAILOVER_TYPE              VARCHAR2(13)
FAILOVER_METHOD            VARCHAR2(10)
FAILED_OVER                VARCHAR2(3)
RESOURCE_CONSUMER_GROUP    VARCHAR2(32)
PDML_STATUS                VARCHAR2(8)
PDDL_STATUS                VARCHAR2(8)
PQ_STATUS                  VARCHAR2(8)
CURRENT_QUEUE_DURATION     NUMBER
CLIENT_IDENTIFIER          VARCHAR2(64)
BLOCKING_SESSION_STATUS    VARCHAR2(11)
BLOCKING_INSTANCE          NUMBER
BLOCKING_SESSION           NUMBER
SEQ#                       NUMBER
EVENT#                     NUMBER
EVENT                      VARCHAR2(64)
P1TEXT                     VARCHAR2(64)
P1                         NUMBER
P1RAW                      RAW(8)
P2TEXT                     VARCHAR2(64)
P2                         NUMBER
P2RAW                      RAW(8)
P3TEXT                     VARCHAR2(64)
P3                         NUMBER
P3RAW                      RAW(8)
WAIT_CLASS_ID              NUMBER
WAIT_CLASS#                NUMBER
WAIT_CLASS                 VARCHAR2(64)
WAIT_TIME                  NUMBER
SECONDS_IN_WAIT            NUMBER
STATE                      VARCHAR2(19)
SERVICE_NAME               VARCHAR2(64)
SQL_TRACE                  VARCHAR2(8)
SQL_TRACE_WAITS            VARCHAR2(5)
SQL_TRACE_BINDS            VARCHAR2(5)

SQL>
========================================================
4. IMP and EXP, IMPDP and EXPDP, and SQL*Loader Examples
========================================================

4.1 EXPDP and IMPDP examples:
=============================

New in Oracle 10g are the impdp and expdp utilities.

EXPDP practice/practice PARFILE=par1.par
EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp LOGFILE=export_dir:hr_schema.explog
EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl

Oracle 10g provides two new views, DBA_DATAPUMP_JOBS and
DBA_DATAPUMP_SESSIONS, that allow the DBA to monitor the progress of all
DataPump operations.

SELECT owner_name, job_name, operation, job_mode, state, degree,
       attached_sessions
FROM   dba_datapump_jobs;

SELECT DPS.owner_name, DPS.job_name, S.osuser
FROM   dba_datapump_sessions DPS, v$session S
WHERE  S.saddr = DPS.saddr;

Example 1. EXPDP parfile
------------------------
JOB_NAME=NightlyDRExport
DIRECTORY=export_dir
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:NightlyDRExport.explog
FULL=Y
PARALLEL=2
FILESIZE=650M
CONTENT=ALL
STATUS=30
ESTIMATE_ONLY=Y

Example 2. EXPDP parfile, only for getting an estimate of the export size
-------------------------------------------------------------------------
JOB_NAME=EstimateOnly
DIRECTORY=export_dir
LOGFILE=export_dir:EstimateOnly.explog
FULL=Y
CONTENT=DATA_ONLY
ESTIMATE=STATISTICS
ESTIMATE_ONLY=Y
STATUS=60

Example 3. EXPDP parfile, only 1 schema, writing to multiple files with the %U variable, limited to 650M
--------------------------------------------------------------------------------------------------------
JOB_NAME=SH_TABLESONLY
DIRECTORY=export_dir
DUMPFILE=export_dir:SHONLY_%U.dmp
LOGFILE=export_dir:SH_TablesOnly.explog
SCHEMAS=SH
PARALLEL=2
FILESIZE=650M
STATUS=60

Example 4. EXPDP parfile, multiple tables, writing to multiple files with the %U variable, limited
--------------------------------------------------------------------------------------------------
JOB_NAME=HR_PAYROLL_REFRESH
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_REFRESH.explog
STATUS=20
FILESIZE=132K
CONTENT=ALL
TABLES=HR.EMPLOYEES,HR.DEPARTMENTS,HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS

Example 5. EXPDP parfile, exports all objects in the HR schema, including metadata, as of just before midnight on April 10, 2005
--------------------------------------------------------------------------------------------------------------------------------
JOB_NAME=HREXPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HREXPORT_%U.dmp
LOGFILE=export_dir:2005-04-10_HRExport.explog
SCHEMAS=HR
CONTENT=ALL
FLASHBACK_TIME="TO_TIMESTAMP('04-10-2005 23:59', 'MM-DD-YYYY HH24:MI')"

Example 6. IMPDP parfile, imports data +only+ into selected tables in the HR schema; multiple dump files will be used
---------------------------------------------------------------------------------------------------------------------
JOB_NAME=HR_PAYROLL_IMPORT
DIRECTORY=export_dir
DUMPFILE=export_dir:HR_PAYROLL_REFRESH_%U.dmp
LOGFILE=export_dir:HR_PAYROLL_IMPORT.implog
STATUS=20
TABLES=HR.PAYROLL_CHECKS,HR.PAYROLL_HOURLY,HR.PAYROLL_SALARY,HR.PAYROLL_TRANSACTIONS
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=TRUNCATE

Example 7. IMPDP parfile; the listed tables in the SH schema are the only tables to be refreshed, and they will be truncated before loading
-------------------------------------------------------------------------------------------------------------------------------------------
DIRECTORY=export_dir
JOB_NAME=RefreshSHTables
DUMPFILE=export_dir:fulldb_%U.dmp
LOGFILE=export_dir:RefreshSHTables.implog
STATUS=30
CONTENT=DATA_ONLY
SCHEMAS=SH
INCLUDE=TABLE:"IN('COUNTRIES','CUSTOMERS','PRODUCTS','SALES')"
TABLE_EXISTS_ACTION=TRUNCATE

Example 8. IMPDP parfile, generates SQLFILE output showing the DDL statements; note that this DDL is +not+ executed!
--------------------------------------------------------------------------------------------------------------------
DIRECTORY=export_dir
JOB_NAME=GenerateImportDDL
DUMPFILE=export_dir:hr_payroll_refresh_%U.dmp
LOGFILE=export_dir:GenerateImportDDL.implog
SQLFILE=export_dir:GenerateImportDDL.sql
INCLUDE=TABLE

Example: schedule a procedure which uses DBMS_DATAPUMP
------------------------------------------------------
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'HR_EXPORT'
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'BEGIN HR.SP_EXPORT;END;'
   ,start_date      => '04/18/2005 23:00:00.000000'
   ,repeat_interval => 'FREQ=DAILY'
   ,enabled         => TRUE
   ,comments        => 'Performs HR Schema Export nightly at 11 PM'
  );
END;
/

======================================
How to use the NETWORK_LINK parameter:
======================================

Note 1:
=======
Lora, the DBA at Acme Bank, is at the center of attention in a
high-profile meeting of the bank's top management team. The objective is
to identify ways of enabling end users to slice and dice the data in the
company's main data warehouse. At the meeting, one idea presented is to
create several small data marts, each based on a particular functional
area, that can each be used by specialized teams.

To effectively implement the data mart approach, the data specialists must
get data into the data marts quickly and efficiently. The challenge the
team faces is figuring out how to quickly refresh the warehouse data to
the data marts, which run on heterogeneous platforms. And that's why Lora
is at the meeting. What options does she propose for moving the data? An
experienced and knowledgeable DBA, Lora provides the meeting attendees
with three possibilities:

- Using transportable tablespaces
- Using Data Pump (Export and Import)
- Pulling tablespaces
This article shows Lora's explanation of these options, including
their implementation details and their pros and cons. Transportable
Tablespaces: Lora starts by describing the transportable
tablespaces option. The quickest way to transport an entire
tablespace to a target system is to simply transfer the
tablespace's underlying files, using FTP (file transfer protocol)
or rcp (remote copy). However, just copying the Oracle data files
is not sufficient; the target database must recognize and import
the files and the corresponding tablespace before the tablespace
data can become available to end users. Using transportable
tablespaces involves copying the tablespace files and making the
data available in the target database. A few checks are necessary
before this option can be considered. First, for a tablespace TS1
to be transported to a target system, it must be self-contained.
That is, all the indexes, partitions, and other dependent segments
of the tables in the tablespace must be inside the tablespace. Lora
explains that if a set of tablespaces contains all the dependent
segments, the set is considered to be self-contained. For instance,
if tablespaces TS1 and TS2 are to be transferred as a set and a
table in TS1 has an index in TS2, the tablespace set is
self-contained. However, if another index of a table in TS1 is in
tablespace TS3, the tablespace set (TS1, TS2) is not
self-contained. To transport the tablespaces, Lora proposes using
the Data Pump Export utility in Oracle Database 10g. Data Pump is
Oracle's next-generation data transfer tool, which replaces the
earlier Oracle Export (EXP) and Import (IMP) tools. Unlike those
older tools, which use regular SQL to extract and insert data, Data
Pump uses proprietary APIs that bypass the SQL buffer, making the
process extremely fast. In addition, Data Pump can extract specific
objects, such as a particular
stored procedure or a set of tables from a particular
tablespace. Data Pump Export and Import are controlled by jobs,
which the DBA can pause, restart, and stop at will. Lora has run a
test before the meeting to see if Data Pump can handle Acme's
requirements. Lora's test transports the TS1 and TS2 tablespaces as
follows:

1. Check that the set of TS1 and TS2 tablespaces is self-contained.
   Issue the following command (the tablespace list is one comma-separated string):

   BEGIN
     SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TS1,TS2', TRUE);
   END;
   /

2. Identify any nontransportable sets. If no rows are selected, the
   tablespaces are self-contained:

   SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;

   no rows selected

3. Ensure the tablespaces are read-only:

   SELECT STATUS FROM DBA_TABLESPACES
   WHERE  TABLESPACE_NAME IN ('TS1','TS2');

   STATUS
   ---------
   READ ONLY
   READ ONLY

4. Transfer the data files of each tablespace to the remote system,
   into the directory /u01/oradata, using a transfer mechanism such
   as FTP or rcp.

5. In the target database, create a database link to the source
   database (named srcdb in the line below):

   CREATE DATABASE LINK srcdb USING 'srcdb';

6. In the target database, import the tablespaces into the database,
   using Data Pump Import:

   impdp lora/lora123
     TRANSPORT_DATAFILES="'/u01/oradata/ts1_1.dbf','/u01/oradata/ts2_1.dbf'"
     NETWORK_LINK='srcdb'
     TRANSPORT_TABLESPACES=\(TS1,TS2\)
     NOLOGFILE=Y

   This step makes the TS1 and TS2 tablespaces and their data
   available in the target database. Note that Lora doesn't export
   the metadata from the source database. She merely specifies the
   value srcdb, the database link to the source database, for the
   parameter NETWORK_LINK in the impdp command above. Data Pump
   Import fetches the necessary metadata from the source across the
   database link and re-creates it in the target.

7. Finally, make the TS1 and TS2 tablespaces in the source database
   read-write:

   ALTER TABLESPACE TS1 READ WRITE;
   ALTER TABLESPACE TS2 READ WRITE;

Note 2:
=======
One of the most
significant characteristics of an import operation is its mode,
because the mode largely determines what is imported. The specified
mode applies to the source of the operation, either a dump file set
or another database if the NETWORK_LINK parameter is specified. The
NETWORK_LINK parameter initiates a network import. This means that
the impdp client initiates the import request, typically to the
local database. That server contacts the remote source database
referenced by the database link in the NETWORK_LINK parameter,
retrieves the data, and writes it directly back to the target
database. There are no dump files involved. In the following
example, source_database_link would be replaced with the name of a
valid database link that must already exist.

impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1
  NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT

This example results in an import of the employees table (excluding
constraints) from the source database. The log file is written to
dpump_dir1, specified on the DIRECTORY parameter.
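Since Data Pump Export and Import run as server-side jobs (as noted above, the DBA can pause, restart, and stop them at will), the client can detach and later re-attach to a running job. A sketch of the interactive job control; the job name HR_EXPORT_JOB is a hypothetical example:

```sql
-- attach to a running Data Pump job (job name is hypothetical):
--   impdp hr/hr ATTACH=HR_EXPORT_JOB
--
-- in the interactive console you can then issue:
--   Import> STATUS        -- show progress of the job
--   Import> STOP_JOB      -- pause the job; it can be restarted later
--   Import> START_JOB     -- resume a stopped job
--   Import> KILL_JOB      -- abort and remove the job

-- running jobs can also be monitored from the data dictionary:
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
```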
4.2 Export / Import examples:
=============================

In all Oracle versions 7, 8, 8i, 9i, and 10g you can use the exp and imp utilities.

exp system/manager file=expdat.dmp compress=Y owner=(HARRY,PIET)
exp system/manager file=hr.dmp owner=HR indexes=Y
exp system/manager file=expdat.dmp tables=(john.SALES)

imp system/manager file=hr.dmp full=Y buffer=64000 commit=Y
imp system/manager file=expdat.dmp fromuser=ted touser=john indexes=N commit=Y buffer=64000
imp rm_live/rm file=dump.dmp tables=(employee)
imp system/manager file=expdat.dmp fromuser=ted touser=john buffer=4194304

c:\> cd [oracle_db_home]\bin
c:\> set nls_lang=american_america.WE8ISO8859P15
# export NLS_LANG=AMERICAN_AMERICA.UTF8
# export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
c:\> imp system/manager fromuser=mis_owner touser=mis_owner file=[yourexport.dmp]

From Oracle8i one can use the QUERY= export parameter to selectively
unload a subset of the data from a table. Look at this example:

exp scott/tiger tables=emp query=\"WHERE deptno=10\"

-- Export metadata only:
The Export utility is used to export the metadata describing
the objects contained in the transported tablespace. For our
example scenario, the Export command could be:

EXP TRANSPORT_TABLESPACE=y TABLESPACES=ts_temp_sales FILE=jan_sales.dmp
This operation will generate an export file, jan_sales.dmp. The
export file will be small, because it contains only metadata. In
this case, the export file will contain information describing the
table temp_jan_sales, such as the column names, column datatype,
and all other information that the target Oracle database will need
in order to access the objects in ts_temp_sales.
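The corresponding import on the target side can be sketched with the classic imp utility; the datafile path and the TTS_OWNERS value below are assumptions for illustration only:

```sql
-- plug the transported tablespace into the target database
-- (datafile path and owner are hypothetical):
--   imp system/manager TRANSPORT_TABLESPACE=y FILE=jan_sales.dmp
--       DATAFILES='/u01/oradata/ts_temp_sales01.dbf' TTS_OWNERS=sales_owner

-- afterwards, make the tablespace writable in the target if needed:
ALTER TABLESPACE ts_temp_sales READ WRITE;
```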
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Extended example:
-----------------

CASE 1:
=======
We create a user Albert on a 10g DB. This user will create a couple
of tables with referential constraints (PK-FK relations). Then we
will export this user, drop the user, and do an import, and see what
we have after the import.

-- User:
create user albert identified by albert
default tablespace ts_cdc
temporary tablespace temp
quota 10M on sysaux
quota 20M on users
quota 50M on ts_cdc;
-- GRANTS:
GRANT create session TO albert;
GRANT create table TO albert;
GRANT create sequence TO albert;
GRANT create procedure TO albert;
GRANT connect TO albert;
GRANT resource TO albert;

-- connect albert/albert

-- create tables
create table LOC          -- table of locations
( LOCID int,
  CITY  varchar2(16),
  constraint pk_loc primary key (locid)
);

create table DEPT         -- table of departments
( DEPID    int,
  DEPTNAME varchar2(16),
  LOCID    int,
  constraint pk_dept primary key (depid),
  constraint fk_dept_loc foreign key (locid) references loc(locid)
);

create table EMP          -- table of employees
( EMPID   int,
  EMPNAME varchar2(16),
  DEPID   int,
  constraint pk_emp primary key (empid),
  constraint fk_emp_dept foreign key (depid) references dept(depid)
);

-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME from user_constraints;

CONSTRAINT_NAME  C  TABLE_NAME  R_CONSTRAINT_NAME
---------------  -  ----------  -----------------
FK_EMP_DEPT      R  EMP         PK_DEPT
FK_DEPT_LOC      R  DEPT        PK_LOC
PK_LOC           P  LOC
PK_DEPT          P  DEPT
PK_EMP           P  EMP

-- insert some data:
INSERT INTO LOC VALUES (1,'Amsterdam');
INSERT INTO LOC VALUES (2,'Haarlem');
INSERT INTO LOC VALUES (3,null);
INSERT INTO LOC VALUES (4,'Utrecht');
INSERT INTO DEPT VALUES (1,'Sales',1);
INSERT INTO DEPT VALUES (2,'PZ',1);
INSERT INTO DEPT VALUES (3,'Management',2);
INSERT INTO DEPT VALUES (4,'RD',3);
INSERT INTO DEPT VALUES (5,'IT',4);
INSERT INTO EMP VALUES (1,'Joop',1);
INSERT INTO EMP VALUES (2,'Gerrit',2);
INSERT INTO EMP VALUES (3,'Harry',2);
INSERT INTO EMP VALUES (4,'Christa',3);
INSERT INTO EMP VALUES (5,null,4);
INSERT INTO EMP VALUES (6,'Nina',5);
INSERT INTO EMP VALUES (7,'Nadia',5);
-- make an export
C:\oracle\expimp>exp '/@test10g2 as sysdba' file=albert.dat owner=albert

Export: Release 10.2.0.1.0 - Production on Sat Mar 1 08:03:59 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user ALBERT
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user ALBERT
About to export ALBERT's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export ALBERT's tables via Conventional Path ...
. . exporting table                           DEPT          5 rows exported
. . exporting table                            EMP          7 rows exported
. . exporting table                            LOC          4 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

C:\oracle\expimp>

-- drop user albert
SQL> drop user albert cascade;

-- create user albert
See above.

-- do the import
C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat fromuser=albert touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:09:26 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing ALBERT's objects into ALBERT
. . importing table                         "DEPT"          5 rows imported
. . importing table                          "EMP"          7 rows imported
. . importing table                          "LOC"          4 rows imported
About to enable constraints...
Import terminated successfully without warnings.

C:\oracle\expimp>

-- connect albert/albert
SQL> select * from emp;

     EMPID EMPNAME               DEPID
---------- ---------------- ----------
         1 Joop                      1
         2 Gerrit                    2
         3 Harry                     2
         4 Christa                   3
         5                           4
         6 Nina                      5
         7 Nadia                     5

7 rows selected.

SQL> select * from loc;

     LOCID CITY
---------- ----------------
         1 Amsterdam
         2 Haarlem
         3
         4 Utrecht

SQL> select * from dept;

     DEPID DEPTNAME              LOCID
---------- ---------------- ----------
         1 Sales                     1
         2 PZ                        1
         3 Management                2
         4 RD                        3
         5 IT                        4
-- show constraints:
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME from user_constraints;

CONSTRAINT_NAME  C  TABLE_NAME  R_CONSTRAINT_NAME
---------------  -  ----------  -----------------
FK_DEPT_LOC      R  DEPT        PK_LOC
FK_EMP_DEPT      R  EMP         PK_DEPT
PK_DEPT          P  DEPT
PK_EMP           P  EMP
PK_LOC           P  LOC

Everything is back again.

CASE 2:
=======
We are not going to drop the user, but empty the tables:

SQL> alter table dept disable constraint FK_DEPT_LOC;
SQL> alter table emp disable constraint FK_EMP_DEPT;
SQL> alter table dept disable constraint PK_DEPT;
SQL> alter table emp disable constraint pk_emp;
SQL> alter table loc disable constraint pk_loc;
SQL> truncate table emp;
SQL> truncate table loc;
SQL> truncate table dept;

-- do the import
C:\oracle\expimp>imp '/@test10g2 as sysdba' file=albert.dat ignore=y fromuser=albert touser=albert

Import: Release 10.2.0.1.0 - Production on Sat Mar 1 08:25:27 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing ALBERT's objects into ALBERT
. . importing table                         "DEPT"          5 rows imported
. . importing table                          "EMP"          7 rows imported
. . importing table                          "LOC"          4 rows imported
About to enable constraints...
IMP-00017: following statement failed with ORACLE error 2270:
 "ALTER TABLE "EMP" ENABLE CONSTRAINT "FK_EMP_DEPT""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
IMP-00017: following statement failed with ORACLE error 2270:
 "ALTER TABLE "DEPT" ENABLE CONSTRAINT "FK_DEPT_LOC""
IMP-00003: ORACLE error 2270 encountered
ORA-02270: no matching unique or primary key for this column-list
Import terminated successfully with warnings.

So the data gets imported, but we have a problem with the FOREIGN KEYS:

SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS from user_constraints;

CONSTRAINT_NAME  C  TABLE_NAME  R_CONSTRAINT_NAME  STATUS
---------------  -  ----------  -----------------  --------
FK_DEPT_LOC      R  DEPT        PK_LOC             DISABLED
FK_EMP_DEPT      R  EMP         PK_DEPT            DISABLED
PK_LOC           P  LOC                            DISABLED
PK_EMP           P  EMP                            DISABLED
PK_DEPT          P  DEPT                           DISABLED
SQL> alter table dept enable constraint pk_dept;
SQL> alter table emp enable constraint pk_emp;
SQL> alter table loc enable constraint pk_loc;
SQL> alter table dept enable constraint FK_DEPT_LOC;
SQL> alter table emp enable constraint FK_EMP_DEPT;
SQL> select CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME, R_CONSTRAINT_NAME, STATUS from user_constraints;

CONSTRAINT_NAME  C  TABLE_NAME  R_CONSTRAINT_NAME  STATUS
---------------  -  ----------  -----------------  --------
FK_DEPT_LOC      R  DEPT        PK_LOC             ENABLED
FK_EMP_DEPT      R  EMP         PK_DEPT            ENABLED
PK_DEPT          P  DEPT                           ENABLED
PK_EMP           P  EMP                            ENABLED
PK_LOC           P  LOC                            ENABLED

Everything is back again.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
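A related tip for the CASE 2 scenario above: instead of typing the disable statements by hand, you can generate them from the data dictionary. A sketch (spool the output and run it; the spool file name is arbitrary):

```sql
SET pagesize 0 feedback off verify off
SPOOL disable_fk.sql
-- generate one DISABLE statement per foreign key in the current schema:
SELECT 'alter table '||table_name||' disable constraint '||constraint_name||';'
FROM   user_constraints
WHERE  constraint_type = 'R';
SPOOL OFF
-- then run the generated statements:
--   @disable_fk.sql
```

The same pattern with "enable" instead of "disable" re-enables the foreign keys after the import.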
What is exported?:
------------------
Tables, indexes, data, and database links get exported.

Example:
--------
exp system/manager file=oemuser.dmp owner=oemuser

Connected to: Oracle9i Enterprise Edition Release 9.0.1.4.0 - Production
With the Partitioning option
JServer Release 9.0.1.4.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OEMUSER
. exporting object type definitions for user OEMUSER
About to export OEMUSER's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OEMUSER's tables via Conventional Path ...
. . exporting table                      CUSTOMERS          2 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

D:\temp>

Can one import tables to a different
tablespace?
------------------------------------------------
- Import the dump file using the INDEXFILE= option.
- Edit the indexfile: remove the remarks and specify the correct tablespaces.
- Run this indexfile against your database; this will create the required
  tables in the appropriate tablespaces.
- Import the table(s) with the IGNORE=Y option.

Alternatively, change the default tablespace for the user:
- Revoke the "UNLIMITED TABLESPACE" privilege from the user.
- Revoke the user's quota on the tablespace from where the object was
  exported. This forces the import utility to create tables in the
  user's default tablespace.
- Make the tablespace to which you want to import the default
  tablespace for the user.
- Import the table.

Can one export to multiple files? / Can one beat the Unix 2 Gig limit?
----------------------------------------------------------------------
From Oracle8i, the export utility supports multiple output files:

exp SCOTT/TIGER FILE=D:\F1.dmp,E:\F2.dmp FILESIZE=10m LOG=scott.log

Use the following technique if you use an Oracle version prior to 8i:
create a compressed export on the fly.

# create a named pipe
mknod exp.pipe p

# read the pipe - output to zip file in the background
gzip < exp.pipe > scott.exp.gz &

# feed the pipe
exp userid=scott/tiger file=exp.pipe ...

Some famous Errors:
-------------------

Error 1:
--------
EXP-00008: ORACLE error 6550 encountered
ORA-06550: line 1, column 31:
PLS-00302: component 'DBMS_EXPORT_EXTENSION' must be declared

1. The errors indicate that $ORACLE_HOME/rdbms/admin/CATALOG.SQL and
$ORACLE_HOME/rdbms/admin/CATPROC.SQL should be run again, as has
been previously suggested. Were these scripts run connected as SYS? Try:

SELECT OBJECT_NAME, OBJECT_TYPE FROM DBA_OBJECTS
WHERE STATUS = 'INVALID' AND OWNER = 'SYS';

Do you have invalid objects? Is
DBMS_EXPORT_EXTENSION invalid? If so, try compiling it manually:
ALTER PACKAGE DBMS_EXPORT_EXTENSION COMPILE BODY;

If you receive errors during manual compilation, use SHOW ERRORS for
further information.

2. Or possibly different imp/exp versions were run against another
version of the database. The problem can be resolved by copying the
higher version's CATEXP.SQL and executing it in the lower version RDBMS.

3. Other fix: if there are problems in exp/imp from single byte to
multibyte databases:
- Analyze which tables/rows could be affected by national characters
  before running the export.
- Increase the size of affected rows.
- Export the table data once again.

Error 2:
--------
EXP-00091: Exporting questionable
statistics.

This warning is generated because the statistics are questionable
due to the client character set difference from the server character
set. There is an article which discusses the causes of questionable
statistics, available via the MetaLink Advanced Search option by
Doc ID: 159787.1 "9i: Import STATISTICS=SAFE". If you do not want
this conversion to occur, you need to ensure the client NLS
environment performing the export is set to match the server.

Fix
~~~
a) If the statistics of a table are not required in the export, take
the export with the parameter STATISTICS=NONE. Example:

$ exp scott/tiger file=emp1.dmp tables=emp STATISTICS=NONE

b) In case the statistics need to be included, use
STATISTICS=ESTIMATE or COMPUTE (default is ESTIMATE).

Error 3:
--------
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully
You can't export any DB with an exp utility of a newer version: the
exp version must be equal to or older than the DB version.

Doc ID: Note:281780.1
Subject: Oracle 9.2.0.4.0: Schema Export Fails with ORA-1403 (No Data Found)
         on Exporting Cluster Definitions
Creation Date: 29-AUG-2004   Type: PROBLEM
Last Revision Date: 29-AUG-2004   Status: PUBLISHED

The information in this article applies to:
- Oracle Server - Enterprise Edition - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Personal Edition   - Version: 9.2.0.4 to 9.2.0.4
- Oracle Server - Standard Edition   - Version: 9.2.0.4 to 9.2.0.4
This problem can occur on any platform.
ERRORS
------
EXP-56: ORACLE error encountered
ORA-1403: no data found
EXP-0: Export terminated unsuccessfully

SYMPTOMS
--------
A schema level export with the 9.2.0.4 export utility from a 9.2.0.4
or higher release database in which XDB has been installed, fails
when exporting the cluster definitions with:
...
. exporting cluster definitions
EXP-00056: ORACLE error 1403 encountered
ORA-01403: no data found
EXP-00000: Export terminated unsuccessfully

You can confirm that XDB has been installed in the database:
SQL> SELECT substr(comp_id,1,15) comp_id, status, substr(version,1,10) version,
            substr(comp_name,1,30) comp_name
     FROM dba_registry ORDER BY 1;

COMP_ID         STATUS      VERSION    COMP_NAME
--------------- ----------- ---------- ------------------------------
...
XDB             INVALID     9.2.0.4.0  Oracle XML Database
XML             VALID       9.2.0.6.0  Oracle XDK for Java
XOQ             LOADED      9.2.0.4.0  Oracle OLAP API
You can create a trace file of the ORA-1403 error:

SQL> SHOW PARAMETER user_dump
SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack level 3';
System altered.

-- Re-run the export

SQL> ALTER SYSTEM SET EVENTS '1403 trace name errorstack off';
System altered.

The trace file that was written to your USER_DUMP_DEST directory shows:

ksedmp: internal or fatal error
ORA-01403: no data found
Current SQL statement for this session:
SELECT xdb_uid FROM SYS.EXU9XDBUID

You can confirm that you have no invalid XDB objects in the database:

SQL> SET lines 200
SQL> SELECT status, object_id, object_type, owner||'.'||object_name "OWNER.OBJECT"
     FROM dba_objects
     WHERE owner='XDB' AND status != 'VALID' ORDER BY 4,2;

no rows selected

Note: If you do have invalid XDB objects, and the same ORA-1403 error
occurs when performing a full database export, see the solution
mentioned in [NOTE:255724.1] "Oracle 9i: Full Export Fails with
ORA-1403 (No Data Found) on Exporting Cluster Definitions".

CHANGES
-------
You recently restored the database from a backup, or you recreated
the controlfile, or you performed Operating System actions on your
database tempfiles.

CAUSE
-----
The temporary tablespace does not have any tempfiles.

Note that the errors are different when exporting with a 9.2.0.3 or
earlier export utility:

. exporting cluster definitions
EXP-00056: ORACLE error 1157 encountered
ORA-01157: cannot identify/lock data file 201 - see DBWR trace file
ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9201WA\TEMP01.DBF'
ORA-06512: at "SYS.DBMS_LOB", line 424
ORA-06512: at "SYS.DBMS_METADATA", line 1140
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully

The errors are also different when exporting with a 9.2.0.5 or later
export utility:

. exporting cluster definitions
EXP-00056: ORACLE error 1157 encountered
ORA-01157: cannot identify/lock data file 201 - see DBWR trace file
ORA-01110: data file 201: 'M:\ORACLE\ORADATA\M9205WA\TEMP01.DBF'
EXP-00000: Export terminated unsuccessfully

FIX
---
1. If the controlfile does not have
any reference to the tempfile(s), add the tempfile(s):

SQL> SET lines 200
SQL> SELECT status, enabled, name FROM v$tempfile;

no rows selected

SQL> ALTER TABLESPACE temp ADD TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF' REUSE;

or: if the controlfile has a reference to the tempfile(s), but the
files are missing on disk, re-create the temporary tablespace, e.g.:

SQL> SET lines 200
SQL> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP201.DBF'
     SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
SQL> DROP TABLESPACE temp;
SQL> CREATE TEMPORARY TABLESPACE temp TEMPFILE 'M:\ORACLE\ORADATA\M9204WA\TEMP01.DBF'
     SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
2. Now re-run the export.

Other errors:
-------------
Doc ID :
Note:175