Page 1: Oracle 10g DBA

Oracle 10g / 11g Database Administration

An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to solving the problems of information management. In general, a server reliably manages a large amount of data in a multiuser environment so that many users can concurrently access the same data. All this is accomplished while delivering high performance. A database server also prevents unauthorized access and provides efficient solutions for failure recovery.

Oracle Database is the first database designed for enterprise grid computing, the most flexible and cost-effective way to manage information and applications. Enterprise grid computing creates large pools of industry-standard, modular storage and servers. With this architecture, each new system can be rapidly provisioned from the pool of components. There is no need to provision for peak workloads, because capacity can be easily added or reallocated from the resource pools as needed.

The database has logical structures and physical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting the access to logical storage structures.

Overview of Oracle Grid Architecture

The Oracle grid architecture pools large numbers of servers, storage, and networks into a flexible, on-demand computing resource for enterprise computing needs. The grid computing infrastructure continually analyzes demand for resources and adjusts supply accordingly.

For example, you could run different applications on a grid of several linked database servers. When reports are due at the end of the month, the database administrator could automatically provision more servers to that application to handle the increased demand.

Grid computing uses sophisticated workload management that makes it possible for applications to share resources across many servers. Data processing capacity can be added or removed on demand, and resources within a location can be dynamically provisioned. Web services can quickly integrate applications to create new business processes.

Difference between a cluster and a grid

Clustering is one technology used to create a grid infrastructure. Simple clusters have static resources for specific applications by specific owners. Grids, which can consist of multiple clusters, are dynamic resource pools shareable among many different applications and users. A grid does not assume that all servers in the grid are running the same set of applications. Applications can be scheduled and migrated across servers in the grid. Grids share resources from and among independent system owners.

At the highest level, the idea of grid computing is computing as a utility. In other words, you should not care where your data resides or what computer processes your request. You should be able to request information or computation and have it delivered, as much as you want and whenever you want. This is analogous to the way electric utilities work: you don't know where the generator is or how the electric grid is wired; you just ask for electricity, and you get it. The goal is to make computing a utility, a commodity, and ubiquitous. Hence the name, The Grid. This view of utility computing is, of course, a "client side" view.

From the "server side", or behind the scenes, the grid is about resource allocation, information sharing, and high availability. Resource allocation ensures that all those that need or request resources are getting what they need, that resources are not standing idle while requests are going unserviced. Information sharing makes sure that the information users and applications need is available where and when it is needed. High availability features guarantee all the data and computation is always there, just like a utility company always provides electric power.

Responsibilities of Database Administrators

Each database requires at least one database administrator (DBA). An Oracle Database system can be large and can have many users. Therefore, database administration is sometimes not a one-person job, but a job for a group of DBAs who share responsibility.

A database administrator's responsibilities can include the following tasks:

Installing and upgrading the Oracle Database server and application tools

Allocating system storage and planning future storage requirements for the database system

Creating primary database storage structures (tablespaces) after application developers have designed an application

Creating primary objects (tables, views, indexes) once application developers have designed an application

Modifying the database structure, as necessary, from information given by application developers

Enrolling users and maintaining system security

Ensuring compliance with Oracle license agreements

Controlling and monitoring user access to the database

Monitoring and optimizing the performance of the database


Planning for backup and recovery of database information

Maintaining archived data on tape

Backing up and restoring the database

Contacting Oracle for technical support

Creating the Database

This section presents the steps involved when you create a database manually. Follow the steps in the order presented. Before you create the database, plan the size of the database and the number of tablespaces and redo log files you want in it. Regarding the size, first find out how many tables will be created in the database and how much space they will occupy over the next one or two years. The best approach is to start with a specific size and adjust it later depending upon the requirement.

Plan the layout of the underlying operating system files your database will comprise. Proper distribution of files can improve database performance dramatically by distributing the I/O during file access. You can distribute I/O in several ways when you install Oracle software and create your database. For example, you can place redo log files on separate disks or use striping. You can situate datafiles to reduce contention, and you can control data density (the number of rows in a data block).

Select the standard database block size. This is specified at database creation by the DB_BLOCK_SIZE initialization parameter and cannot be changed after the database is created. A block size of 4K or 8K is widely used.

Before you start creating the database, it is best to write down the specification and then proceed. The examples shown in these steps create an example database my_ica_db. Let us create a database my_ica_db with the following specification.

Database name and System Identifier

     SID        =    myicadb
     DB_NAME    =    myicadb

TABLESPACES (we will have 6 tablespaces in this database, with 1 datafile in each tablespace)

     Tablespace Name    Datafile                                Size
     system             /u01/oracle/oradata/myica/sys.dbf       250M
     users              /u01/oracle/oradata/myica/usr.dbf       100M
     undotbs            /u01/oracle/oradata/myica/undo.dbf      100M
     temp               /u01/oracle/oradata/myica/temp.dbf      100M
     index_data         /u01/oracle/oradata/myica/indx.dbf      100M
     sysaux             /u01/oracle/oradata/myica/sysaux.dbf    100M

LOGFILES (we will have 2 log groups in the database)

     Logfile Group    Member                                Size
     Group 1          /u01/oracle/oradata/myica/log1.ora    50M
     Group 2          /u01/oracle/oradata/myica/log2.ora    50M

CONTROL FILE         /u01/oracle/oradata/myica/control.ora
PARAMETER FILE       /u01/oracle/dbs/initmyicadb.ora

(Remember, the parameter file name should be of the format init<sid>.ora, and it should be in the ORACLE_HOME/dbs directory on Unix and the ORACLE_HOME/database directory on Windows.)

Now let us start creating the database.

Step 1: Log in to the oracle account and make directories for your database.

     $mkdir /u01/oracle/oradata/myica
     $mkdir /u01/oracle/oradata/myica/bdump
     $mkdir /u01/oracle/oradata/myica/udump
     $mkdir /u01/oracle/oradata/myica/cdump

Step 2: Create the parameter file by copying the default template (init.ora) and set the required parameters.

     $cd /u01/oracle/dbs
     $cp init.ora initmyicadb.ora

Now open the parameter file and set the following parameters:

     $vi initmyicadb.ora

     DB_NAME=myicadb
     DB_BLOCK_SIZE=8192
     CONTROL_FILES=/u01/oracle/oradata/myica/control.ora
     BACKGROUND_DUMP_DEST=/u01/oracle/oradata/myica/bdump
     USER_DUMP_DEST=/u01/oracle/oradata/myica/udump
     CORE_DUMP_DEST=/u01/oracle/oradata/myica/cdump
     UNDO_TABLESPACE=undotbs
     UNDO_MANAGEMENT=AUTO


After entering the above parameters, save the file by pressing Esc :wq.

Step 3: Now set the ORACLE_SID environment variable and start the instance.

     $export ORACLE_SID=myicadb
     $sqlplus
     Enter User: / as sysdba
     SQL>startup nomount

Step 4: Give the create database command.

Here I am not specifying optional settings such as language, character set, etc. For these settings Oracle will use the default values. I am giving the barest command to create the database, to keep it simple. The command to create the database is

SQL>create database myicadb
      datafile '/u01/oracle/oradata/myica/sys.dbf' size 250M
      sysaux datafile '/u01/oracle/oradata/myica/sysaux.dbf' size 100M
      undo tablespace undotbs
            datafile '/u01/oracle/oradata/myica/undo.dbf' size 100M
      default temporary tablespace temp
            tempfile '/u01/oracle/oradata/myica/tmp.dbf' size 100M
      logfile
            group 1 '/u01/oracle/oradata/myica/log1.ora' size 50M,
            group 2 '/u01/oracle/oradata/myica/log2.ora' size 50M;

After the command finishes you will get the following message:

Database created.

If you are getting any errors, see the accompanying messages. If no accompanying messages are shown, then look at the alert_myicadb.log file located in the BACKGROUND_DUMP_DEST directory, which will show the exact reason why the command failed. After you have rectified the error, delete all created files in the /u01/oracle/oradata/myica directory and give the above command again.


Step 5: After the above command finishes, the database will get mounted and opened. Now create the additional tablespaces.

To create the USERS tablespace:

SQL>create tablespace users
      datafile '/u01/oracle/oradata/myica/usr.dbf' size 100M;

To create the INDEX_DATA tablespace:

SQL>create tablespace index_data
      datafile '/u01/oracle/oradata/myica/indx.dbf' size 100M;

Step 6: To populate the database with the data dictionary and to install procedural options, execute the following scripts. First execute the CATALOG.SQL script to install the data dictionary:

SQL>@/u01/oracle/rdbms/admin/catalog.sql

The above script will take several minutes. After it is finished, run the CATPROC.SQL script to install the procedural option:

SQL>@/u01/oracle/rdbms/admin/catproc.sql

This script will also take several minutes to complete.

Step 7: Now change the passwords for the SYS and SYSTEM accounts, since the default passwords change_on_install and manager are known by everybody.

SQL>alter user sys identified by myica;
SQL>alter user system identified by myica;

Step 8: Create additional user accounts. You can create as many user accounts as you like. Let us create the popular account SCOTT.

SQL>create user scott default tablespace users
      identified by tiger quota 10M on users;
SQL>grant connect to scott;

Step 9: Add this database SID in the listener.ora file and restart the listener process.

     $cd /u01/oracle/network/admin
     $vi listener.ora


(This file will already contain sample entries. Copy and paste one sample entry and edit the SID setting.)

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 200.200.100.1)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = ORCL)
      (ORACLE_HOME = /u01/oracle)
    )
#Add these lines
    (SID_DESC =
      (SID_NAME = myicadb)
      (ORACLE_HOME = /u01/oracle)
    )
  )

Save the file by pressing Esc :wq. Now restart the listener process.

     $lsnrctl stop
     $lsnrctl start

Step 10: It is recommended to take a full database backup right after you have created the database. How to take a backup is dealt with in the Backup and Recovery section.

Congratulations, you have just created an Oracle database.

 Managing Tablespaces and Datafiles

Using multiple tablespaces provides several advantages:


Separate user data from data dictionary data to reduce contention among dictionary objects and schema objects for the same datafiles.

Separate data of one application from the data of another to prevent multiple applications from being affected if a tablespace must be taken offline.

Store the datafiles of different tablespaces on different disk drives to reduce I/O contention.

Take individual tablespaces offline while others remain online, providing better overall availability.

Creating New Tablespaces

You can create Locally Managed or Dictionary Managed tablespaces. In prior versions of Oracle, only dictionary-managed tablespaces were available, but from Oracle ver. 8i you can also create locally managed tablespaces. The advantages of locally managed tablespaces are:

Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps, resulting in the following benefits:

Concurrency and speed of space operations are improved, because space allocations and deallocations modify locally managed resources (bitmaps stored in the datafile headers) rather than requiring centrally managed resources such as enqueues

Performance is improved, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated

To create a locally managed tablespace give the following command

SQL> CREATE TABLESPACE ica_lmts DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M

    EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

               

AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K.


The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then the default size is 1M. The following example creates a locally managed tablespace with a uniform extent size of 256K:

SQL> CREATE TABLESPACE ica_lmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;

To Create Dictionary Managed Tablespace

SQL> CREATE TABLESPACE ica_dmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M EXTENT MANAGEMENT DICTIONARY;

Bigfile Tablespaces (Introduced in Oracle Ver. 10g)

A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks) datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles, but the files cannot be as large. Bigfile tablespaces can reduce the number of datafiles needed for a database.

To create a bigfile tablespace give the following command

SQL> CREATE BIGFILE TABLESPACE ica_bigtbs

    DATAFILE '/u02/oracle/ica/bigtbs01.dbf' SIZE 50G;

 

To Extend the Size of a tablespace

Option 1

 You can extend the size of a tablespace by increasing the size of an existing datafile by typing the following command


SQL> alter database datafile '/u01/oracle/data/icatbs01.dbf' resize 100M;

This will increase the size from 50M to 100M.

Option 2

You can also extend the size of a tablespace by adding a new datafile to it. This is useful if an existing datafile has reached the operating system file size limit, or the drive on which the file resides does not have free space. To add a new datafile to an existing tablespace, give the following command.

SQL> alter tablespace ica add datafile '/u02/oracle/ica/icatbs02.dbf' size 50M;

Option 3

You can also use the autoextend feature of datafiles. With this, Oracle will automatically increase the size of a datafile whenever space is required. You can specify by how much the file should grow (NEXT) and the maximum size (MAXSIZE) to which it should extend.

To make an existing datafile autoextendable, give the following command.

SQL> alter database datafile ‘/u01/oracle/ica/icatbs01.dbf’ autoextend on next 5M maxsize 500M;

You can also make a datafile auto extendable while creating a new tablespace itself by giving the following command.


SQL> create tablespace ica datafile ‘/u01/oracle/ica/icatbs01.dbf’ size 50M autoextend on next 5M maxsize 500M;

To decrease the size of a tablespace

You can decrease the size of a tablespace by decreasing the size of the datafile associated with it. You can decrease a datafile only down to the size of the empty space in it. To decrease the size of a datafile, give the following command.

SQL> alter database datafile ‘/u01/oracle/ica/icatbs01.dbf’ resize 30M;

 

Coalescing Tablespaces

A free extent in a dictionary-managed tablespace is made up of a collection of contiguous free blocks. When allocating new extents to a tablespace segment, the database uses the free extent closest in size to the required extent. In some cases, when segments are dropped, their extents are deallocated and marked as free, but adjacent free extents are not immediately recombined into larger free extents. The result is fragmentation that makes allocation of larger extents more difficult.

You can use the ALTER TABLESPACE ... COALESCE statement to manually coalesce any adjacent free extents. To coalesce a tablespace, give the following command.

SQL> alter tablespace ica coalesce;
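To judge whether a dictionary-managed tablespace would benefit from coalescing, you can query the DBA_FREE_SPACE_COALESCED data dictionary view (a sketch; a low coalesced percentage suggests fragmented free space):

```sql
-- Check how coalesced the free space in each tablespace is
SELECT tablespace_name,
       total_extents,
       percent_extents_coalesced
  FROM dba_free_space_coalesced;
```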


Taking Tablespaces Offline or Online

You can take an online tablespace offline so that it is temporarily unavailable for general use. The rest of the database remains open and available for users to access data. Conversely, you can bring an offline tablespace online to make the schema objects within the tablespace available to database users. The database must be open to alter the availability of a tablespace.

To alter the availability of a tablespace, use the ALTER TABLESPACE statement. You must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.

To Take a Tablespace Offline give the following command

SQL>alter tablespace ica offline;

To again bring it back online give the following command.

SQL>alter tablespace ica online;

To take an individual datafile offline, type the following command

SQL>alter database datafile ‘/u01/oracle/ica/ica_tbs01.dbf’ offline;

Again to bring it back online give the following command

SQL> alter database datafile ‘/u01/oracle/ica/ica_tbs01.dbf’ online;
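You can confirm the result by querying the V$DATAFILE view, which shows the current availability status of each datafile:

```sql
-- List each datafile and its current status (e.g. ONLINE, OFFLINE,
-- or SYSTEM for the SYSTEM tablespace's files)
SELECT name, status
  FROM v$datafile;
```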


Note: You can’t take individual datafiles offline if the database is running in NOARCHIVELOG mode. If a datafile has become corrupt or gone missing while the database is running in NOARCHIVELOG mode, then you can only drop it by giving the following command

SQL>alter database datafile ‘/u01/oracle/ica/ica_tbs01.dbf’ offline for drop;

Making a Tablespace Read-Only

Making a tablespace read-only prevents write operations on the datafiles in the tablespace. The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large, static portions of a database. Read-only tablespaces also provide a way of protecting historical data so that users cannot modify it. Making a tablespace read-only prevents updates on all tables in the tablespace, regardless of a user's update privilege level.

To make a tablespace read only

SQL>alter tablespace ica read only;

Again to make it read write

SQL>alter tablespace ica read write;
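The current status of each tablespace (ONLINE, OFFLINE, or READ ONLY) can be checked in the DBA_TABLESPACES view:

```sql
-- Show the availability status of every tablespace
SELECT tablespace_name, status
  FROM dba_tablespaces;
```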

Renaming Tablespaces

Using the RENAME TO clause of the ALTER TABLESPACE statement, you can rename a permanent or temporary tablespace. For example, the following statement renames the users tablespace:

ALTER TABLESPACE users RENAME TO usersts;

 


The following conditions affect the operation of this statement:

The COMPATIBLE parameter must be set to 10.0 or higher.

If the tablespace being renamed is the SYSTEM tablespace or the SYSAUX tablespace, then it will not be renamed and an error is raised.

If any datafile in the tablespace is offline, or if the tablespace is offline, then the tablespace is not renamed and an error is raised.
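Before renaming, you can check the COMPATIBLE setting with the SQL*Plus SHOW PARAMETER command:

```sql
-- COMPATIBLE must be 10.0 or higher for tablespace rename to work
SQL> SHOW PARAMETER compatible
```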

Dropping Tablespaces

You can drop a tablespace and its contents (the segments contained in the tablespace) from the database if the tablespace and its contents are no longer required. You must have the DROP TABLESPACE system privilege to drop a tablespace.

Caution: Once a tablespace has been dropped, the data in the tablespace is not recoverable. Therefore, make sure that all data contained in a tablespace to be dropped will not be required in the future. Also, immediately before and after dropping a tablespace from a database, back up the database completely.

To drop a tablespace give the following command.

SQL> drop tablespace ica;

This will drop the tablespace only if it is empty. If it is not empty and you want to drop it anyway, then add the following keywords:

SQL>drop tablespace ica including contents;

This will drop the tablespace even if it is not empty. But the datafiles will not be deleted; you have to use operating system commands to delete the files.

But if you include the DATAFILES keyword, then the associated datafiles will also be deleted from the disk.

SQL>drop tablespace ica including contents and datafiles;


Temporary Tablespace

A temporary tablespace is used for sort operations that are too large to fit in memory. Every database should have one temporary tablespace. To create a temporary tablespace, give the following command.

SQL>create temporary tablespace temp tempfile ‘/u01/oracle/data/ica_temp.dbf’ size 100M    extent management local  uniform size 5M;

The extent management clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The AUTOALLOCATE clause is not allowed for temporary tablespaces.
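Note that tempfiles are listed in the DBA_TEMP_FILES view rather than DBA_DATA_FILES:

```sql
-- Tempfiles do not appear in DBA_DATA_FILES; query DBA_TEMP_FILES instead
SELECT tablespace_name, file_name, bytes
  FROM dba_temp_files;
```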

Increasing or Decreasing the size of a Temporary Tablespace

You can use the resize clause to increase or decrease the size of a temporary tablespace. The following statement resizes a temporary file:

SQL>ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

The following statement drops a temporary file and deletes the operating system file:

SQL> ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP  INCLUDING DATAFILES;

Tablespace Groups

A tablespace group enables a user to consume temporary space from multiple tablespaces. A tablespace group has the following characteristics:

      It contains at least one tablespace. There is no explicit limit on the maximum number of tablespaces that are contained in a group.


      It shares the namespace of tablespaces, so its name cannot be the same as any tablespace.

      You can specify a tablespace group name wherever a tablespace name would appear when you assign a default temporary tablespace for the database or a temporary tablespace for a user.

You do not explicitly create a tablespace group. Rather, it is created implicitly when you assign the first temporary tablespace to the group. The group is deleted when the last temporary tablespace it contains is removed from it.

Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused when one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces.

The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces.
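For example, to see which temporary tablespaces belong to which group:

```sql
-- List each tablespace group and its member temporary tablespaces
SELECT group_name, tablespace_name
  FROM dba_tablespace_groups;
```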

Creating a Temporary Tablespace Group

You create a tablespace group implicitly when you include the TABLESPACE GROUP clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement and the specified tablespace group does not currently exist.

For example, if neither group1 nor group2 exists, then the following statements create those groups, each of which has only the specified tablespace as a member:

CREATE TEMPORARY TABLESPACE ica_temp2 TEMPFILE '/u02/oracle/ica/ica_temp.dbf'
    SIZE 50M
    TABLESPACE GROUP group1;

ALTER TABLESPACE ica_temp2 TABLESPACE GROUP group2;

 


Assigning a Tablespace Group as the Default Temporary Tablespace

Use the ALTER DATABASE ...DEFAULT TEMPORARY TABLESPACE statement to assign a tablespace group as the default temporary tablespace for the database. For example:

ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;
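You can confirm the database default temporary tablespace (or tablespace group) by querying DATABASE_PROPERTIES:

```sql
-- Shows the current default temporary tablespace or tablespace group
SELECT property_value
  FROM database_properties
 WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
```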

 

Diagnosing and Repairing Locally Managed Tablespace Problems

To diagnose and repair corruption in locally managed tablespaces, Oracle supplies a package called DBMS_SPACE_ADMIN. This package has many procedures, described below:

SEGMENT_VERIFY: Verifies the consistency of the extent map of the segment.

SEGMENT_CORRUPT: Marks the segment corrupt or valid so that appropriate error recovery can be done. Cannot be used for a locally managed SYSTEM tablespace.

SEGMENT_DROP_CORRUPT: Drops a segment currently marked corrupt (without reclaiming space). Cannot be used for a locally managed SYSTEM tablespace.

SEGMENT_DUMP: Dumps the segment header and extent map of a given segment.

TABLESPACE_VERIFY: Verifies that the bitmaps and extent maps for the segments in the tablespace are in sync.

TABLESPACE_REBUILD_BITMAPS: Rebuilds the appropriate bitmap. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_FIX_BITMAPS: Marks the appropriate data block address range (extent) as free or used in the bitmap. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_REBUILD_QUOTAS: Rebuilds quotas for a given tablespace.

TABLESPACE_MIGRATE_FROM_LOCAL: Migrates a locally managed tablespace to a dictionary-managed tablespace. Cannot be used to migrate a locally managed SYSTEM tablespace to a dictionary-managed SYSTEM tablespace.

TABLESPACE_MIGRATE_TO_LOCAL: Migrates a tablespace from dictionary-managed format to locally managed format.

TABLESPACE_RELOCATE_BITMAPS: Relocates the bitmaps to the destination specified. Cannot be used for a locally managed SYSTEM tablespace.

TABLESPACE_FIX_SEGMENT_STATES: Fixes the state of the segments in a tablespace in which migration was aborted.

 

Be careful using the above procedures; if not used properly, you will corrupt your database. Contact Oracle Support before using these procedures.

Following are some of the scenarios where you can use the above procedures.


Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap)

The TABLESPACE_VERIFY procedure discovers that a segment has allocated blocks that are marked free in the bitmap, but no overlap between segments is reported.

In this scenario, perform the following tasks:

1.         Call the SEGMENT_DUMP procedure to dump the data block address (DBA) ranges allocated to the segment.

2.         For each range, call the TABLESPACE_FIX_BITMAPS procedure with the TABLESPACE_EXTENT_MAKE_USED option to mark the space as used.

3.         Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
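The steps above can be sketched as the following SQL*Plus session for a tablespace ICA. This is a hedged sketch only: the file and block numbers are hypothetical placeholders that must be taken from the actual SEGMENT_DUMP trace output, and these procedures should only be run under Oracle Support guidance.

```sql
-- 1. Dump the extent map of the segment whose header is (hypothetically)
--    at relative file 4, block 33
EXEC DBMS_SPACE_ADMIN.SEGMENT_DUMP('ICA', 4, 33);

-- 2. For each range reported in the dump, mark the space as used
--    (placeholder range: file 4, blocks 33 through 83)
EXEC DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS('ICA', 4, 33, 83, -
     DBMS_SPACE_ADMIN.TABLESPACE_EXTENT_MAKE_USED);

-- 3. Rebuild the quotas for the tablespace
EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('ICA');
```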

Scenario 2: Dropping a Corrupted Segment

You cannot drop a segment because the bitmap has segment blocks marked "free". The system has automatically marked the segment corrupted.

In this scenario, perform the following tasks:

1.         Call the SEGMENT_VERIFY procedure with the SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps are reported, then proceed with steps 2 through 5.

2.         Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the segment.

3.         For each range, call TABLESPACE_FIX_BITMAPS with the TABLESPACE_EXTENT_MAKE_FREE option to mark the space as free.

4.         Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry.

5.         Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.

Scenario 3: Fixing Bitmap Where Overlap is Reported

The TABLESPACE_VERIFY procedure reports some overlapping. Some of the real data must be sacrificed based on previous internal errors.


After choosing the object to be sacrificed, in this case say, table t1, perform the following tasks:

1.         Make a list of all objects that t1 overlaps.

2.         Drop table t1. If necessary, follow up by calling the SEGMENT_DROP_CORRUPT procedure.

3.         Call the SEGMENT_VERIFY procedure on all objects that t1 overlapped. If necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark appropriate bitmap blocks as used.

4.         Rerun the TABLESPACE_VERIFY procedure to verify the problem is resolved.

Scenario 4: Correcting Media Corruption of Bitmap Blocks

A set of bitmap blocks has media corruption.

In this scenario, perform the following tasks:

1.         Call the TABLESPACE_REBUILD_BITMAPS procedure, either on all bitmap blocks, or on a single block if only one is corrupt.

2.         Call the TABLESPACE_REBUILD_QUOTAS procedure to rebuild quotas.

3.         Call the TABLESPACE_VERIFY procedure to verify that the bitmaps are consistent.

Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace

To migrate a dictionary-managed tablespace to a locally managed tablespace, use the TABLESPACE_MIGRATE_TO_LOCAL procedure.

For example, to migrate the dictionary-managed tablespace ICA2 to locally managed, give the following command:

EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('ica2');

 


Transporting Tablespaces

You can use the transportable tablespaces feature to move a subset of an Oracle Database and "plug" it in to another Oracle Database, essentially moving tablespaces between the databases. The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database standard block size.

Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are simply copied to the destination location, and you use an import utility to transfer only the metadata of the tablespace objects to the new database.

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This functionality can be used to migrate a database from one platform to another. However, not all platforms are supported. To see which platforms are supported, give the following query:

SQL> COLUMN PLATFORM_NAME FORMAT A30

SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                  ENDIAN_FORMAT

----------- ------------------------------ --------------

          1 Solaris[tm] OE (32-bit)        Big

          2 Solaris[tm] OE (64-bit)        Big

          7 Microsoft Windows NT           Little

         10 Linux IA (32-bit)              Little

          6 AIX-Based Systems (64-bit)     Big


          3 HP-UX (64-bit)                 Big

          5 HP Tru64 UNIX                  Little

          4 HP-UX IA (64-bit)              Big

         11 Linux IA (64-bit)              Little

         15 HP Open VMS                    Little

10 rows selected.

If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

Important: Before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or higher, you can accomplish this by making the datafile read/write at least once.

SQL> alter tablespace ica read only;

Then,

SQL> alter tablespace ica read write;

Procedure for transporting tablespaces

To move or copy a set of tablespaces, perform the following steps.

1.         For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view.

If you are transporting the tablespace set to a platform different from the source platform, then determine if the source and target platforms are supported and their endianness. If both platforms have the same endianness, no conversion is necessary. Otherwise you must do a conversion of the tablespace set either at the source or target database.


Ignore this step if you are transporting your tablespace set to the same platform.

2.         Pick a self-contained set of tablespaces.

3.         Generate a transportable tablespace set.

A transportable tablespace set consists of datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces.

If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 4.

4.         Transport the tablespace set.

Copy the datafiles and the export file to the target database. You can do this using any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).
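As an illustration of the database-side copy, the DBMS_FILE_TRANSFER.COPY_FILE procedure copies a binary file between two directory objects. The directory objects SOURCE_DIR and DEST_DIR and the path /u01/transport below are assumptions for this sketch; create them to match your own locations first.

SQL> CREATE DIRECTORY source_dir AS '/u01/oracle/oradata/ica_salesdb';
SQL> CREATE DIRECTORY dest_dir   AS '/u01/transport';
SQL> BEGIN
       DBMS_FILE_TRANSFER.COPY_FILE(
           source_directory_object      => 'SOURCE_DIR',
           source_file_name             => 'ica_sales_101.dbf',
           destination_directory_object => 'DEST_DIR',
           destination_file_name        => 'ica_sales_101.dbf');
     END;
     /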

If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the target platform, you should perform a target-side conversion now.

5.         Plug in the tablespace.

Invoke the Import utility to plug the set of tablespaces into the target database.

Transporting Tablespace Example

These steps are illustrated more fully in the example that follows, where it is assumed the following datafiles and tablespaces exist:

Tablespace      Datafile
ica_sales_1     /u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
ica_sales_2     /u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf

 

Step 1: Determine if Platforms are Supported and Endianness

This step is only necessary if you are transporting the tablespace set to a platform different from the source platform. If ica_sales_1 and ica_sales_2 were being transported to a different platform, you can execute the following query on both platforms to determine if the platforms are supported and their endian formats:

SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
  FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
 WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

The following is the query result from the source platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Solaris[tm] OE (32-bit)   Big

The following is the result from the target platform:

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Microsoft Windows NT      Little

You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.

Step 2: Pick a Self-Contained Set of Tablespaces

There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained: it must not contain tables with foreign keys referring to primary keys of tables in other tablespaces, and it must not contain tables with some partitions in other tablespaces. To find out whether the tablespace set is self-contained, do the following:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ica_sales_1,ica_sales_2', TRUE);

 

After executing the above, give the following query to see whether any violations exist:

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

 

VIOLATIONS

---------------------------------------------------------------------------

Constraint DEPT_FK between table SAMI.EMP in tablespace ICA_SALES_1 and table

SAMI.DEPT in tablespace OTHER

Partitioned table SAMI.SALES is partially contained in the transportable set

 

These violations must be resolved before ica_sales_1 and ica_sales_2 are transportable.

Step 3: Generate a Transportable Tablespace Set

After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions:

Make all tablespaces in the set you are copying read-only.

SQL> ALTER TABLESPACE ica_sales_1 READ ONLY;

Tablespace altered.

SQL> ALTER TABLESPACE ica_sales_2 READ ONLY;


Tablespace altered.

Invoke the Data Pump Export utility (expdp) on the host system and specify which tablespaces are in the transportable set. (The TRANSPORT_TABLESPACES parameter belongs to Data Pump Export; the directory object data_pump_dir is an assumption and must exist.)

SQL> HOST

$ expdp system/password DUMPFILE=expdat.dmp DIRECTORY=data_pump_dir TRANSPORT_TABLESPACES=ica_sales_1,ica_sales_2

If ica_sales_1 and ica_sales_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the ica_sales_1 and ica_sales_2 tablespaces. You have to use the RMAN utility to convert the datafiles.

$ RMAN TARGET /

Recovery Manager: Release 10.1.0.0.0
Copyright (c) 1995, 2003, Oracle Corporation.  All rights reserved.

 

connected to target database: ica_salesdb (DBID=3295731590)

Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

RMAN> CONVERT TABLESPACE ica_sales_1, ica_sales_2
      TO PLATFORM 'Microsoft Windows NT' FORMAT '/temp/%U';

 

Starting backup at 08-APR-03
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Finished backup at 08-APR-03

Step 4: Transport the Tablespace Set

Transport both the datafiles and the export file of the tablespaces to a place accessible to the target database. You can use any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).

Step 5: Plug In the Tablespace Set

Plug in the tablespaces and integrate the structural information using the Data Pump Import utility, impdp (REMAP_SCHEMA is a Data Pump parameter; the directory object data_pump_dir is an assumption and must exist):

impdp system/password DUMPFILE=expdat.dmp DIRECTORY=data_pump_dir
   TRANSPORT_DATAFILES=/ica_salesdb/ica_sales_101.dbf,/ica_salesdb/ica_sales_201.dbf
   REMAP_SCHEMA=smith:sami REMAP_SCHEMA=williams:john

The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by smith in the source database will be owned by sami in the target database after the tablespace set is plugged in. Similarly, objects owned by williams in the source database will be owned by john in the target database. In this case, the target database is not required to have users smith and williams, but must have users sami and john.

After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no error has occurred.

Now, put the tablespaces into read/write mode as follows:

ALTER TABLESPACE ica_sales_1 READ WRITE;

ALTER TABLESPACE ica_sales_2 READ WRITE;

 

Viewing Information about Tablespaces and Datafiles

Oracle provides many data dictionary views for viewing information about tablespaces and datafiles. Some of them are listed below.

To view information about tablespaces in a database, give the following queries:

SQL>select * from dba_tablespaces;
SQL>select * from v$tablespace;

To view information about Datafiles

SQL>select * from dba_data_files;
SQL>select * from v$datafile;

To view information about Tempfiles

SQL>select * from dba_temp_files;
SQL>select * from v$tempfile;

To view information about free space in datafiles

SQL>select * from dba_free_space;

To view information about free space in tempfiles

SQL>select * from V$TEMP_SPACE_HEADER;
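For example, a common way to use dba_free_space is to summarize the free space in each tablespace. The query below is a suggestion built on the standard columns of that view:

SQL>select tablespace_name, sum(bytes)/1024/1024 "FREE_MB"
    from dba_free_space
    group by tablespace_name;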

 


Relocating or Renaming Datafiles

You can rename datafiles to either change their names or relocate them.

Renaming or Relocating Datafiles belonging to a Single Tablespace

To rename or relocate datafiles belonging to a Single Tablespace do the following.

1.       Take the tablespace offline

2.       Rename or Relocate the datafiles using operating system command

3.       Give the ALTER TABLESPACE with RENAME DATAFILE option to change the filenames within the Database.

4.       Bring the tablespace Online

For example, suppose you have a tablespace users with the following datafiles:

        /u01/oracle/ica/usr01.dbf
        /u01/oracle/ica/usr02.dbf

Now you want to relocate /u01/oracle/ica/usr01.dbf to /u02/oracle/ica/usr01.dbf and rename /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf. Then follow the given steps:

1.       Bring the tablespace offline

SQL> alter tablespace users offline;

2.       Copy the file to new location using o/s command.

$cp /u01/oracle/ica/usr01.dbf  /u02/oracle/ica/usr01.dbf

Rename the file /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf using an o/s command.

$mv  /u01/oracle/ica/usr02.dbf /u01/oracle/ica/users02.dbf


3.       Now start SQLPLUS and type the following command to rename and relocate these files

SQL> alter tablespace users rename file
     '/u01/oracle/ica/usr01.dbf', '/u01/oracle/ica/usr02.dbf'
     to '/u02/oracle/ica/usr01.dbf', '/u01/oracle/ica/users02.dbf';

4.       Now bring the tablespace  Online

SQL> alter tablespace users online;

Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces

You can rename and relocate datafiles in one or more tablespaces using the ALTER DATABASE RENAME FILE statement. This method is the only choice if you want to rename or relocate datafiles of several tablespaces in one operation. You must have the ALTER DATABASE system privilege.

To rename datafiles in multiple tablespaces, follow these steps.

1.      Ensure that the database is mounted but closed.

2.      Copy the datafiles to be renamed to their new locations and new names, using the operating system.

3.      Use ALTER DATABASE to rename the file pointers in the database control file.

For example, the following statement renames the datafiles /u02/oracle/rbdb1/sort01.dbf and /u02/oracle/rbdb1/user3.dbf to /u02/oracle/rbdb1/temp01.dbf and /u02/oracle/rbdb1/users03.dbf, respectively:

ALTER DATABASE
   RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
               '/u02/oracle/rbdb1/user3.dbf'
            TO '/u02/oracle/rbdb1/temp01.dbf',
               '/u02/oracle/rbdb1/users03.dbf';


Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile names exactly as they appear in the DBA_DATA_FILES view.

4.      Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

5.      Start the Database

Managing REDO LOGFILES

Every Oracle database must have at least 2 redo logfile groups. Oracle writes all statements except SELECT statements to the logfiles. This is done because Oracle performs deferred batch writes, i.e. it does not write changes to disk per statement; instead it performs writes in batches. So if a user updates a row, Oracle will change the row in the db_buffer_cache, record the statement in the logfile, and give the user the message that the row is updated. The row is not yet written back to the datafile; it is written later, in a batch, by the database writer process. This is known as deferred batch writes.

Since Oracle defers writing to the datafiles, there is a chance of a power failure or system crash before a changed row is written to disk. That is why Oracle writes the statement to the redo logfile: in case of a power failure or system crash, Oracle can re-apply the changes the next time you open the database.

Adding a New Redo Logfile Group

To add a new Redo Logfile group to the database give the following command


SQL>alter database add logfile group 3 ('/u01/oracle/ica/log3.ora') size 10M;

Note: You can add groups to a database up to the MAXLOGFILES setting you have specified at the time of creating the database. If you want to change the MAXLOGFILES setting, you have to create a new controlfile.

Adding Members to an existing group

To add new member to an existing group give the following command

SQL>alter database add logfile member '/u01/oracle/ica/log11.ora' to group 1;

Note: You can add members to a group up to the MAXLOGMEMBERS setting you have specified at the time of creating the database. If you want to change the MAXLOGMEMBERS setting, you have to create a new controlfile.

Important: It is strongly recommended that you multiplex logfiles, i.e. have at least two log members per group in the database, with one member on one disk and another on a second disk.

Dropping Members from a group

You can drop a member from a log group only if the group has more than one member and it is not the current group. If you want to drop members from the current group, force a log switch, or wait until a log switch occurs and another group becomes current. To force a log switch, give the following command:

SQL>alter system switch logfile;

The following command can be used to drop a logfile member

SQL>alter database drop logfile member '/u01/oracle/ica/log11.ora';


Note: When you drop logfiles the files are not deleted from the disk. You have to use O/S command to delete the files from disk.

Dropping Logfile Group

Similarly, you can drop a logfile group only if the database has more than two groups and the group is not the current group.

SQL>alter database drop logfile group 3;

Note: When you drop logfiles the files are not deleted from the disk. You have to use O/S command to delete the files from disk.

Resizing Logfiles

You cannot resize logfiles. If you want to resize a logfile, create a new logfile group with the new size and then drop the old logfile group.
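For example, to replace a 10M logfile group 1 with a 20M group, the sequence might look like the following. The filename and sizes are illustrative, and group 1 must not be the current group when you drop it (force log switches if necessary):

SQL>alter database add logfile group 4 ('/u01/oracle/ica/log4.ora') size 20M;
SQL>alter system switch logfile;
SQL>alter database drop logfile group 1;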

Renaming or Relocating Logfiles

To Rename or Relocate Logfiles perform the following steps

For example, suppose you want to move a logfile from /u01/oracle/ica/log1.ora to /u02/oracle/ica/log1.ora. Then do the following:

Steps

1.      Shutdown the database

SQL>shutdown immediate;

 

2.      Move the logfile from Old location to new location using operating system command


$mv /u01/oracle/ica/log1.ora  /u02/oracle/ica/log1.ora

 

3.      Start and mount the database

SQL>startup mount

 

4.      Now give the following command to change the location in controlfile

SQL>alter database rename file '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora';

5.      Open the database

SQL>alter database open;

 

Clearing REDO LOGFILES

A redo log file might become corrupted while the database is open, ultimately stopping database activity because archiving cannot continue. In this situation, the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database.

The following statement clears the log files in redo log group number 3:

ALTER DATABASE CLEAR LOGFILE GROUP 3;

 

This statement overcomes two situations where dropping redo logs is not possible:

      If there are only two log groups

      The corrupt redo log file belongs to the current group


If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

 

This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived.

If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes a message in the alert log describing the backups from which you cannot recover.

Viewing Information About Logfiles

To see how many logfile groups there are and their status, type the following query:

SQL>SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
     1       1 20605 1048576       1 YES ACTIVE        61515628 21-JUN-07
     2       1 20606 1048576       1 NO  CURRENT       41517595 21-JUN-07
     3       1 20603 1048576       1 YES INACTIVE      31511666 21-JUN-07
     4       1 20604 1048576       1 YES INACTIVE      21513647 21-JUN-07

 

To see how many members there are and where they are located, give the following query:

SQL>SELECT * FROM V$LOGFILE;

GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         /U01/ORACLE/ICA/LOG1.ORA
     2         /U01/ORACLE/ICA/LOG2.ORA

Managing Control Files

Every Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:

         The database name

         Names and locations of associated datafiles and redo log files

         The timestamp of the database creation

         The current log sequence number

         Checkpoint information

It is strongly recommended that you multiplex control files, i.e. have at least two control files in the database, one on one hard disk and another on a second disk. This way, if the control file on one disk becomes corrupt, the other copy will be available and you do not have to recover the control file.

You can multiplex the control file at the time of creating the database, or later on. If you have not multiplexed the control file at database creation, you can do it now by following the procedure given below.

Multiplexing Control File

Steps:

      1.      Shutdown the Database.

SQL>SHUTDOWN IMMEDIATE;

 


    2.      Copy the control file from old location to new location using operating system command. For example.

$cp /u01/oracle/ica/control.ora  /u02/oracle/ica/control.ora

 

    3.      Now open the parameter file and specify the new location like this

CONTROL_FILES=/u01/oracle/ica/control.ora

Change it to

CONTROL_FILES=/u01/oracle/ica/control.ora,/u02/oracle/ica/control.ora

 

    4.      Start the Database

Now Oracle will start updating both control files, and if one control file is lost you can copy it from the other location.

Changing the Name of a Database

If you ever want to change the name of the database, or change the settings of MAXDATAFILES, MAXLOGFILES, or MAXLOGMEMBERS, you have to create a new control file.

Creating A New Control File

Follow the given steps to create a new controlfile

Steps

1.      First generate the create controlfile statement


SQL>alter database backup controlfile to trace;

After giving this statement, Oracle writes the CREATE CONTROLFILE statement to a trace file. The trace file is randomly named, something like ORA23212.TRC, and is created in the USER_DUMP_DEST directory.

2.      Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor. This file contains the CREATE CONTROLFILE statement. It has two sets of statements, one with RESETLOGS and another without RESETLOGS. Since we are changing the name of the database, we have to use the RESETLOGS option of the CREATE CONTROLFILE statement. Copy and paste the statement into a file; let it be c.sql.

 

3.      Now open the c.sql file in a text editor and change the database name from ica to prod, as shown in the example below:

CREATE CONTROLFILE

   SET DATABASE prod   

   LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',

                    '/u01/oracle/ica/redo01_02.log'),

           GROUP 2 ('/u01/oracle/ica/redo02_01.log',

                    '/u01/oracle/ica/redo02_02.log'),

           GROUP 3 ('/u01/oracle/ica/redo03_01.log',

                    '/u01/oracle/ica/redo03_02.log')

   RESETLOGS

   DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,

            '/u01/oracle/ica/rbs01.dbs' SIZE 5M,

            '/u01/oracle/ica/users01.dbs' SIZE 5M,

            '/u01/oracle/ica/temp01.dbs' SIZE 5M


   MAXLOGFILES 50

   MAXLOGMEMBERS 3

   MAXLOGHISTORY 400

   MAXDATAFILES 200

   MAXINSTANCES 6

   ARCHIVELOG;

 

4.      Start and do not mount the database.

SQL>STARTUP NOMOUNT;

 

5.      Now execute c.sql script

SQL> @/u01/oracle/c.sql

 

6.      Now open the database with RESETLOGS

SQL>ALTER DATABASE OPEN RESETLOGS;

Cloning an Oracle Database.

You have a Production database running on one server. The company management wants to develop some new modules, and they have hired some programmers to do that. These programmers require access to the Production database and want to make changes to it. As the DBA, you cannot give them direct access to the Production database, so you create a copy of this database on another server and give the developers access to the copy.

Let us see an example of cloning a database


We have a database running on the production server with the following files:

PARAMETER FILE located in /u01/oracle/ica/initica.ora

CONTROL_FILES=/u01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/ica/bdump
USER_DUMP_DEST=/u01/oracle/ica/udump
CORE_DUMP_DEST=/u01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/u01/oracle/ica/arc1"

DATAFILES =

     /u01/oracle/ica/sys.dbf
     /u01/oracle/ica/usr.dbf
     /u01/oracle/ica/rbs.dbf
     /u01/oracle/ica/tmp.dbf
     /u01/oracle/ica/sysaux.dbf

LOGFILE=

     /u01/oracle/ica/log1.ora
     /u01/oracle/ica/log2.ora

Now you want to copy this database to SERVER 2. SERVER 2 does not have a /u01 filesystem; it has a /d01 filesystem.

To Clone this Database on SERVER 2 do the following.

Steps :-

1.      In SERVER 2 install the same version of o/s and same version Oracle as in SERVER 1.

 


2.      In SERVER 1 generate CREATE CONTROLFILE statement by typing the following command

 

SQL>alter database backup controlfile to trace;

 

Now, go to the USER_DUMP_DEST directory and open the latest trace file. This file contains the steps as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a file. Let the filename be cr.sql.

 

The CREATE CONTROLFILE Statement will look like this.

CREATE CONTROLFILE
   REUSE DATABASE ica
   LOGFILE GROUP 1 ('/u01/oracle/ica/log1.ora'),
           GROUP 2 ('/u01/oracle/ica/log2.ora')
   NORESETLOGS
   DATAFILE '/u01/oracle/ica/sys.dbf' SIZE 300M,
            '/u01/oracle/ica/rbs.dbf' SIZE 50M,
            '/u01/oracle/ica/usr.dbf' SIZE 50M,
            '/u01/oracle/ica/tmp.dbf' SIZE 50M,
            '/u01/oracle/ica/sysaux.dbf' SIZE 100M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;


 

3.      In SERVER 2 create the following directories

$cd /d01/oracle

$mkdir ica

$mkdir arc1

$cd ica

$mkdir bdump udump cdump

 

Shutdown the database on SERVER 1 and transfer all datafiles, logfiles and the control file to SERVER 2 into the /d01/oracle/ica directory.

 

Copy parameter file to SERVER 2 in /d01/oracle/dbs directory and copy all archive log files to SERVER 2 in /d01/oracle/ica/arc1 directory. Copy the cr.sql script file to /d01/oracle/ica directory.

 

4.      Open the parameter file on SERVER 2 and change the following parameters:

 

CONTROL_FILES=/d01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/d01/oracle/ica/bdump
USER_DUMP_DEST=/d01/oracle/ica/udump
CORE_DUMP_DEST=/d01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/d01/oracle/ica/arc1"


5.      Now, open the cr.sql file in text editor and change the locations like this

CREATE CONTROLFILE
   REUSE DATABASE ica
   LOGFILE GROUP 1 ('/d01/oracle/ica/log1.ora'),
           GROUP 2 ('/d01/oracle/ica/log2.ora')
   NORESETLOGS
   DATAFILE '/d01/oracle/ica/sys.dbf' SIZE 300M,
            '/d01/oracle/ica/rbs.dbf' SIZE 50M,
            '/d01/oracle/ica/usr.dbf' SIZE 50M,
            '/d01/oracle/ica/tmp.dbf' SIZE 50M,
            '/d01/oracle/ica/sysaux.dbf' SIZE 100M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;

In SERVER 2 export ORACLE_SID environment variable and start the instance

$export ORACLE_SID=ica

$sqlplus

Enter user-name: / as sysdba

SQL> startup nomount;

6.      Run cr.sql script to create the controlfile

SQL>@/d01/oracle/ica/cr.sql


7.      Open the database

SQL>alter database open;

Managing the UNDO TABLESPACE

Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.

Undo records are used to:

Roll back transactions when a ROLLBACK statement is issued

Recover the database

Provide read consistency

Analyze data as of an earlier point in time by using Flashback Query

Recover from logical corruptions using Flashback features

Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.

Switching to Automatic Management of Undo Space

To switch to automatic management of undo space, set the parameters described in the following steps.

Steps:-

1. If you have not created an undo tablespace at the time of creating a database then, create an undo tablespace by typing the following command


SQL>create undo tablespace myundo datafile
    '/u01/oracle/ica/undo_tbs.dbf' size 500M
    autoextend on next 5M;

 

When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for datafiles of the undo tablespace so that they automatically increase in size when more space is needed

 

2. Shutdown the Database and set the following parameters in parameter file.

UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo

3. Start the Database.

 

Now Oracle Database will use Automatic Undo Space Management.

Calculating the Space Requirements For Undo Retention

You can calculate space requirements manually using the following formula:

UndoSpace = UR * UPS + overhead

where:

UndoSpace is the number of undo blocks

UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.

UPS is undo blocks for each second


overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)

As an example, if UNDO_RETENTION is set to 3 hours and the transaction rate (UPS) is 100 undo blocks per second, with an 8K block size, the required undo space is computed as follows:

(3 * 3600 * 100 * 8K) = 8.24 GB

To get the values for UPS and overhead, query the V$UNDOSTAT view by giving the following statement:

SQL> Select * from V$UNDOSTAT;
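Putting the formula and V$UNDOSTAT together, a query along the following lines estimates the undo space in bytes needed for the current UNDO_RETENTION setting. It is a sketch of the standard sizing query and ignores the small metadata overhead:

SQL>select (ur * (ups * dbs)) "UNDO_BYTES"
    from (select value ur from v$parameter where name = 'undo_retention'),
         (select sum(undoblks)/sum((end_time - begin_time) * 86400) ups
            from v$undostat),
         (select value dbs from v$parameter where name = 'db_block_size');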

Altering UNDO Tablespace

If the Undo tablespace is full, you can resize existing datafiles or add new datafiles to it

The following example extends an existing datafile

SQL> alter database datafile '/u01/oracle/ica/undo_tbs.dbf' resize 700M;

The following example adds a new datafile to undo tablespace

 

SQL> ALTER TABLESPACE myundo

     ADD DATAFILE '/u01/oracle/ica/undo02.dbf' SIZE 200M
     AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;

 

Dropping an Undo Tablespace

Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace myundo:

SQL> DROP TABLESPACE myundo;

An undo tablespace can be dropped only if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction that died but has not yet been recovered), the DROP TABLESPACE statement fails.

Switching Undo Tablespaces

You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.

The following statement switches to a new undo tablespace:

ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;

 

Assuming myundo is the current undo tablespace, after this command successfully executes, the instance uses myundo2 in place of myundo as its undo tablespace.

Viewing Information about Undo Tablespace

To view statistics for tuning the undo tablespace, query the following view:

SQL>select * from v$undostat;

To see how many active transactions there are and to see undo segment information, give the following command:

SQL>select * from v$transaction;

To see the sizes of extents in the undo tablespace, give the following query:

SQL>select * from DBA_UNDO_EXTENTS;

SQL Loader

The SQL Loader utility is used to load data from other data sources into Oracle. For example, if you have a table in FoxPro, Access, Sybase or any other third-party database, you can use SQL Loader to load the data into Oracle tables. SQL Loader reads data only from flat files. So if you want to load data from FoxPro or any other database, you first have to convert that data into a delimited-format flat file or a fixed-length-format flat file, and then use SQL Loader to load the data into Oracle.

The following is the procedure to load data from a third-party database into Oracle using SQL Loader.

1. Convert the Data into Flat file using third party database command.

2. Create the table structure in the Oracle database using appropriate datatypes.

3. Write a Control File, describing how to interpret the flat file and options to load the data.

4. Execute the SQL Loader utility, specifying the control file as a command line argument.

To understand it better let us see the following case study.

CASE STUDY (Loading Data from MS-ACCESS to Oracle)

Suppose you have a table named EMP in MS-Access, running under the Windows OS, with the following structure:

     EMPNO     INTEGER
     NAME      TEXT(50)
     SAL       CURRENCY
     JDATE     DATE

This table contains some 10,000 rows. Now you want to load the data from this table into an Oracle table. The Oracle database is running on the Linux OS.

Solution

Steps

Start MS-Access and convert the table into a comma-delimited flat file (popularly known as CSV) by clicking the File/Save As menu. Let the delimited file be named emp.csv.

1. Now transfer this file to the Linux server using the FTP command.


a. Go to the command prompt in Windows.

b. At the command prompt, type FTP followed by the IP address of the server running Oracle.

FTP will then prompt you for a username and password to connect to the Linux server. Supply a valid username and password of the Oracle user on Linux.

For example:

C:\>ftp 200.200.100.111

Name: oracle

Password:oracle

FTP>

c. Now give the PUT command to transfer the file from the current Windows machine to the Linux machine.

FTP>put

Local file: C:\emp.csv
Remote file: /u01/oracle/emp.csv

File transferred in 0.29 Seconds
FTP>

d. After the file is transferred, quit the FTP utility by typing the bye command.

FTP>bye

Good-Bye

2. Now go to the Linux machine and create a table in Oracle with the same structure as in MS-Access, using appropriate datatypes. For example, create a table like this:

$sqlplus scott/tiger


SQL>CREATE TABLE emp (empno number(5),

                name varchar2(50),

                sal  number(10,2),

                jdate date);

3. After creating the table, you have to write a control file describing the actions which SQL Loader should perform. You can use any text editor to write the control file. Now let us write a control file for our case study:

$vi emp.ctl

1        LOAD DATA

2        INFILE '/u01/oracle/emp.csv'

3        BADFILE '/u01/oracle/emp.bad'

4        DISCARDFILE '/u01/oracle/emp.dsc'

5        INSERT INTO TABLE emp

6        FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS

7        (empno, name, sal, jdate date 'mm/dd/yyyy')

Notes: (Do not write the line numbers; they are meant for explanation purposes only.)

1.       The LOAD DATA statement is required at the beginning of the control file.

2.       The INFILE option specifies where the input file is located.

3.       Specifying BADFILE is optional. If you specify it, bad records found during loading will be stored in this file.

4.       Specifying DISCARDFILE is optional. If you specify it, records which do not meet a WHEN condition will be written to this file.

5.       You can use any of the following loading options:


1.       INSERT: Loads rows only if the target table is empty.

2.       APPEND: Loads rows whether the target table is empty or not.

3.       REPLACE: First deletes all the rows in the existing table and then loads rows.

4.       TRUNCATE: First truncates the table and then loads rows.

6.       This line indicates how the fields are separated in the input file. Since in our case the fields are separated by ",", we have specified "," as the terminating character for fields. You can replace this with any character that is used to terminate fields; some of the popularly used terminating characters are the semicolon ";", the colon ":", the pipe "|", etc. TRAILING NULLCOLS means that if the last column is null it is treated as a null value; otherwise, SQL Loader will treat the record as bad if the last column is null.

7.       This line specifies the columns of the target table. Note how the format for date columns is specified.

4. After you have written the control file, save it and then call the SQL Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=emp.ctl log=emp.log

After you have executed the above command, SQL Loader will show you output describing how many rows it has loaded.

The LOG option of sqlldr specifies where the log file of this SQL Loader session should be created. The log file records all the actions which SQL Loader has performed, i.e. how many rows were loaded, how many were rejected, how much time was taken to load the rows, and so on. You should view this file for any errors encountered while running SQL Loader.
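Notes 6 and 7 above describe comma-terminated fields with optional double-quote enclosures and TRAILING NULLCOLS. As a rough illustration of those parsing rules (not SQL Loader itself; the two sample records here are made up), a generic CSV parser treats such input the same way:

```python
import csv
import io

# Two made-up records in the format the control file describes:
# fields terminated by ",", optionally enclosed by '"'.
data = '101,"Smith, John",5000,01/15/2004\n102,Jones,4500,\n'

rows = list(csv.reader(io.StringIO(data)))

# The optional enclosure keeps the embedded comma inside the name field.
print(rows[0])  # ['101', 'Smith, John', '5000', '01/15/2004']

# The second record has an empty last field; TRAILING NULLCOLS tells
# SQL Loader to load such a column as NULL instead of rejecting the row.
print(rows[1])  # ['102', 'Jones', '4500', '']
```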

CASE STUDY (Loading Data from Fixed Length file into Oracle)

Suppose we have a fixed-length format file containing employee data, as shown below, and want to load this data into an Oracle table.

7782 CLARK      MANAGER   7839  2572.50          10

7839 KING       PRESIDENT       5500.00          10


7934 MILLER     CLERK     7782   920.00          10

7566 JONES      MANAGER   7839  3123.75          20

7499 ALLEN      SALESMAN  7698  1600.00   300.00 30

7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30

7658 CHAN       ANALYST   7566  3450.00          20

7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30

 

SOLUTION:

Steps:

1.      First open the file in a text editor and count the lengths of the fields. For example, in our fixed-length file, the employee number is from position 1 to position 4, the employee name is from position 6 to position 15, and the job name is from position 17 to position 25. The other columns are located similarly.

2.      Create a table in Oracle with any name, but its columns should match those specified in the fixed-length file. In our case, give the following command to create the table:

 

 

SQL> CREATE TABLE emp (empno  NUMBER(5),

        name VARCHAR2(20),

        job  VARCHAR2(10),

        mgr  NUMBER(5),

        sal  NUMBER(10,2),

        comm NUMBER(10,2),

        deptno     NUMBER(3) );


3.      After creating the table, write a control file using any text editor:

$vi empfix.ctl

1)   LOAD DATA

2)   INFILE '/u01/oracle/fix.dat'

3)   INTO TABLE emp

4)   (empno         POSITION(01:04)   INTEGER EXTERNAL,

       name         POSITION(06:15)   CHAR,

       job          POSITION(17:25)   CHAR,

       mgr          POSITION(27:30)   INTEGER EXTERNAL,

       sal          POSITION(32:39)   DECIMAL EXTERNAL,

       comm         POSITION(41:48)   DECIMAL EXTERNAL,

5)   deptno         POSITION(50:51)   INTEGER EXTERNAL)

 

Notes:

(Do not write the line numbers; they are meant for explanation purposes only.)

1.       The LOAD DATA statement is required at the beginning of the control file.


2.       The name of the file containing data follows the INFILE parameter.

3.       The INTO TABLE statement is required to identify the table to be loaded into.

4.       Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that column. empno, name, job, and so on are names of columns in table emp. The datatypes (INTEGER EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatype of data fields in the file, not of corresponding columns in the emp table.

5.       Note that the set of column specifications is enclosed in parentheses.

 

4.      After saving the control file, start the SQL Loader utility by typing the following command:

 

$sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log direct=y

After you have executed the above command, SQL Loader will show you output describing how many rows it has loaded.
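The POSITION(start:end) clauses in the control file above use 1-based, inclusive character positions. As a sketch of how those positions carve up a record (an illustration only, not part of SQL Loader), you can slice one of the sample lines the same way:

```python
# Column positions as written in the control file: 1-based, inclusive.
FIELDS = {
    "empno":  (1, 4),
    "name":   (6, 15),
    "job":    (17, 25),
    "mgr":    (27, 30),
    "sal":    (32, 39),
    "comm":   (41, 48),
    "deptno": (50, 51),
}

def cut(record, start, end):
    # POSITION(start:end) -> Python slice [start-1:end], stripped of padding.
    return record[start - 1:end].strip()

line = "7782 CLARK      MANAGER   7839  2572.50          10"
row = {name: cut(line, s, e) for name, (s, e) in FIELDS.items()}

print(row["empno"], row["name"], row["sal"], row["deptno"])
print(repr(row["comm"]))  # blank field: CLARK has no commission
```

Counting positions this way in step 1 avoids off-by-one errors when writing the control file.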

Loading Data into Multiple Tables using WHEN condition

You can simultaneously load data into multiple tables in the same session. You can also use a WHEN condition to load only those rows which meet a particular condition (only equal-to "=" and not-equal-to "<>" conditions are allowed).

For example, suppose we have a fixed-length file as shown below:

7782 CLARK      MANAGER   7839  2572.50          10

7839 KING       PRESIDENT       5500.00          10

7934 MILLER     CLERK     7782   920.00          10

7566 JONES      MANAGER   7839  3123.75          20

7499 ALLEN      SALESMAN  7698  1600.00   300.00 30

7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30


7658 CHAN       ANALYST   7566  3450.00          20

7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30

 

Now we want to load all the employees whose deptno is 10 into the emp1 table and those employees whose deptno is not equal to 10 into the emp2 table. To do this, first create the tables emp1 and emp2 with appropriate columns and datatypes. Then write a control file as shown below:

$vi emp_multi.ctl

LOAD DATA
INFILE '/u01/oracle/empfix.dat'
APPEND
INTO TABLE scott.emp1
WHEN (deptno='10')
  (empno       POSITION(01:04)   INTEGER EXTERNAL,

   name         POSITION(06:15)   CHAR,

   job          POSITION(17:25)   CHAR,

   mgr          POSITION(27:30)   INTEGER EXTERNAL,

   sal          POSITION(32:39)   DECIMAL EXTERNAL,

   comm         POSITION(41:48)   DECIMAL EXTERNAL,

   deptno       POSITION(50:51)   INTEGER EXTERNAL)

INTO TABLE scott.emp2
WHEN (deptno<>'10')
  (empno       POSITION(01:04)   INTEGER EXTERNAL,


   name         POSITION(06:15)   CHAR,

   job          POSITION(17:25)   CHAR,

   mgr          POSITION(27:30)   INTEGER EXTERNAL,

   sal          POSITION(32:39)   DECIMAL EXTERNAL,

   comm         POSITION(41:48)   DECIMAL EXTERNAL,

   deptno       POSITION(50:51)   INTEGER EXTERNAL)

 

After saving the file emp_multi.ctl, run sqlldr:

$sqlldr userid=scott/tiger control=emp_multi.ctl
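The WHEN clauses above route each record to emp1 or emp2 based on the deptno field at positions 50:51. The routing logic can be sketched like this (an illustration only; the records are three of the sample rows):

```python
# Route fixed-length records on deptno (positions 50:51), mimicking
# WHEN (deptno='10') vs WHEN (deptno<>'10') in the control file.
records = [
    "7782 CLARK      MANAGER   7839  2572.50          10",
    "7566 JONES      MANAGER   7839  3123.75          20",
    "7499 ALLEN      SALESMAN  7698  1600.00   300.00 30",
]

emp1, emp2 = [], []      # stand-ins for scott.emp1 and scott.emp2
for rec in records:
    deptno = rec[49:51]  # POSITION(50:51), 1-based -> slice [49:51]
    (emp1 if deptno == "10" else emp2).append(rec)

print(len(emp1), len(emp2))  # 1 2
```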

Conventional Path Load and Direct Path Load

SQL Loader can load data into an Oracle database using the conventional path method or the direct path method. You can specify the method by using the DIRECT command line option. If you give DIRECT=TRUE, SQL Loader will use direct path loading; if you omit this option or specify DIRECT=FALSE, SQL Loader will use the conventional path loading method.

Conventional Path

Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables.

When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed.


The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.

Direct Path

In direct path loading, Oracle does not use the SQL INSERT statement for loading rows. Instead, it writes the rows directly into fresh blocks beyond the high water mark in the datafiles, i.e. it does not scan for free blocks below the high water mark. A direct path load is very fast because:

Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.

SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is reduced.

A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.

A direct path load uses multiblock asynchronous I/O for writes to the database files.

During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer cache. This minimizes contention with other Oracle users.

Restrictions on Using Direct Path Loads

The following conditions must be satisfied for you to use the direct path load method:

Tables are not clustered.

Tables to be loaded do not have any active transactions pending.

You are not loading a parent table together with a child table.

You are not loading BFILE columns.

Export and Import

These tools are used to transfer data from one Oracle database to another Oracle database. You use the Export tool to export data from the source database, and the Import tool to load data into the target database. When you export tables from the source database, the Export tool extracts the tables and puts them into the dump file. This dump file is transferred to the target database. At the target database, the Import tool copies the data from the dump file into the target database.

From version 10g, Oracle recommends using the Data Pump Export and Import tools, which are enhanced versions of the original Export and Import tools.

The export dump file contains objects in the following order:

1. Type definitions
2. Table definitions
3. Table data
4. Table indexes
5. Integrity constraints, views, procedures, and triggers
6. Bitmap, function-based, and domain indexes

When you import the tables, the Import tool performs the actions in the following order: new tables are created; data is imported and indexes are built; triggers are imported; integrity constraints are enabled on the new tables; and any bitmap, function-based, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. It also prevents redundant triggers from firing twice on the same data.

Invoking Export and Import

You can run the Export and Import tools in two modes:

            Command Line Mode

            Interactive Mode

When you just type exp or imp at the OS prompt, the tool runs in interactive mode, i.e. it prompts you for all the necessary input. If you supply command line arguments when calling exp or imp, it runs in command line mode.


Command Line Parameters of Export tool

You can control how Export runs by entering the EXP command followed by various arguments. To specify parameters, you use keywords:

     Format:  EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)

     Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)

               or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table

 

 

Keyword                 Description (Default)
--------------------------------------------------------------
USERID                  username/password
BUFFER                  size of data buffer
FILE                    output files (EXPDAT.DMP)
COMPRESS                import into one extent (Y)
GRANTS                  export grants (Y)
INDEXES                 export indexes (Y)
DIRECT                  direct path (N)
LOG                     log file of screen output
ROWS                    export data rows (Y)
CONSISTENT              cross-table consistency (N)
FULL                    export entire file (N)
OWNER                   list of owner usernames
TABLES                  list of table names
RECORDLENGTH            length of IO record
INCTYPE                 incremental export type
RECORD                  track incr. export (Y)
TRIGGERS                export triggers (Y)
STATISTICS              analyze objects (ESTIMATE)
PARFILE                 parameter filename
CONSTRAINTS             export constraints (Y)
OBJECT_CONSISTENT       transaction set to read only during object export (N)
FEEDBACK                display progress every x rows (0)
FILESIZE                maximum size of each dump file
FLASHBACK_SCN           SCN used to set session snapshot back to
FLASHBACK_TIME          time used to get the SCN closest to the specified time
QUERY                   select clause used to export a subset of a table
RESUMABLE               suspend when a space related error is encountered (N)
RESUMABLE_NAME          text string used to identify resumable statement
RESUMABLE_TIMEOUT       wait time for RESUMABLE
TTS_FULL_CHECK          perform full or partial dependency check for TTS
TABLESPACES             list of tablespaces to export
TRANSPORT_TABLESPACE    export transportable tablespace metadata (N)
TEMPLATE                template name which invokes iAS mode export

The Export and Import tools support four modes of operation:

FULL        : Exports all the objects in all schemas
OWNER       : Exports objects belonging only to the given OWNER
TABLES      : Exports individual tables
TABLESPACE  : Exports all objects located in a given TABLESPACE

Example of Exporting Full Database

The following example shows how to export the full database:

$exp USERID=scott/tiger FULL=y FILE=myfull.dmp

In the above command, the FILE option specifies the name of the dump file, the FULL option specifies that you want to export the full database, and the USERID option specifies the user account to connect to the database. Note that to perform a full export, the user should have the DBA role or the EXP_FULL_DATABASE privilege.

 Example of Exporting Schemas

To export objects stored in particular schemas, you can run the Export utility with the following arguments:

$exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp

The above command will export all the objects stored in the SCOTT and ALI schemas.


Exporting Individual Tables

To export individual tables, give the following command:

$exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp

This will export scott’s emp and sales tables.

Exporting Consistent Image of the tables

If you include the CONSISTENT=Y option in the export command, the Export utility will export a consistent image of the table, i.e. changes made to the table during the export operation will not be exported.

Using Import Utility

Objects exported by the Export utility can only be imported by the Import utility. The Import utility can run in interactive mode or command line mode.

You can let Import prompt you for parameters by entering the IMP command followed by your username/password:

     Example: IMP SCOTT/TIGER

Or, you can control how Import runs by entering the IMP command followed by various arguments. To specify parameters, you use keywords:

     Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)

     Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N

               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.


Keyword Description (Default)

USERID username/password

BUFFER size of data buffer

FILE input files (EXPDAT.DMP)

SHOW just list file contents (N)

IGNORE ignore create errors (N)

GRANTS import grants (Y)

INDEXES import indexes (Y)

ROWS import data rows (Y)

LOG log file of screen output

FULL import entire file (N)

FROMUSER list of owner usernames

TOUSER list of usernames

TABLES list of table names

RECORDLENGTH length of IO record

INCTYPE incremental import type

COMMIT commit array insert (N)

PARFILE parameter filename

CONSTRAINTS import constraints (Y)

DESTROY overwrite tablespace data file (N)

INDEXFILE write table/index info to specified file

SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)

FEEDBACK display progress every x rows(0)

TOID_NOVALIDATE skip validation of specified type ids

FILESIZE maximum size of each dump file

STATISTICS import precomputed statistics (always)


RESUMABLE suspend when a space related error is encountered(N)

RESUMABLE_NAME text string used to identify resumable statement

RESUMABLE_TIMEOUT wait time for RESUMABLE

COMPILE compile procedures, packages, and functions (Y)

STREAMS_CONFIGURATION import streams general metadata (Y)

STREAMS_INSTANTIATION import streams instantiation metadata (N)

Example Importing Individual Tables

To import individual tables from a full database export dump file, give the following command:

$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)

This command will import only the emp and dept tables into the scott user, and you will get output similar to that shown below:

Export file created by EXPORT:V10.00.00 via conventional path

import done in WE8DEC character set and AL16UTF16 NCHAR character set

. importing SCOTT's objects into SCOTT

. . importing table                         "DEPT"          4 rows imported

. . importing table                          "EMP"         14 rows imported

Import terminated successfully without warnings.


Example, Importing Tables of One User account into another User account

For example, suppose Ali has exported tables into a dump file mytables.dmp. Now Scott wants to import these tables. To achieve this, Scott will give the following import command:

$imp scott/tiger  FILE=mytables.dmp FROMUSER=ali TOUSER=scott

The Import utility will give a warning that the tables in the dump file were exported by user Ali and not by you, and will then proceed.

Example Importing Tables Using Pattern Matching

Suppose you want to import all tables from a dump file whose names match a particular pattern. To do so, use the "%" wildcard character in the TABLES option. For example, the following command will import all tables whose names start with the letter "a" and those tables whose names contain the letter "d":

$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(a%,%d%)
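The "%" wildcard in the TABLES option behaves like the SQL LIKE operator: "%" matches any run of characters. The matching rule can be sketched as follows (the table names here are made-up examples):

```python
import re

def matches(pattern, table_name):
    # Translate an imp-style pattern: "%" -> ".*", everything else literal.
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("%")) + "$"
    return re.match(regex, table_name, re.IGNORECASE) is not None

tables = ["ACCOUNTS", "EMP", "DEPT", "SALGRADE", "BONUS"]

# TABLES=(a%,%d%): names starting with "a" or containing "d".
selected = [t for t in tables
            if matches("a%", t) or matches("%d%", t)]
print(selected)  # ['ACCOUNTS', 'DEPT', 'SALGRADE']
```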

Migrating a Database across Platforms

The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another. This includes moving between UNIX and NT systems and also moving between two NT systems running on different platforms.

The following steps present a general overview of how to move a database between platforms.

1. As a DBA user, issue the following SQL query to get the exact name of all tablespaces. You will need this information later in the process.

SQL> SELECT tablespace_name FROM dba_tablespaces;

2. As a DBA user, perform a full export from the source database, for example:


> exp system/manager FULL=y FILE=myfullexp.dmp

3.     Move the dump file to the target database server. If you use FTP, be sure to copy it in binary format (by entering binary at the FTP prompt) to avoid file corruption.

4. Create a database on the target server.

5. Before importing the dump file, you must first create your tablespaces, using the information obtained in Step 1. Otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system.

6. As a DBA user, perform a full import with the IGNORE parameter enabled:

> imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp

Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import to complete.

7. Perform a full backup of your new database.

DATA PUMP Utility

Starting with Oracle 10g, Oracle has introduced an enhanced version of the EXPORT and IMPORT utilities known as DATA PUMP. Data Pump is similar to the EXPORT and IMPORT utilities but has many advantages. Some of the advantages are:

Most Data Pump export and import operations occur on the Oracle database server, i.e. all the dump files are created on the server even if you run the Data Pump utility from a client machine. This results in increased performance because data is not transferred through the network.

 

You can stop and restart export and import jobs. This is particularly useful if you have started an export or import job and, after some time, want to do some other urgent work.


 

The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations.

 

The ability to estimate how much space an export job would consume, without actually performing the export.

 

Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.

 

Using Data Pump Export Utility

To use Data Pump, the DBA has to create a directory on the server machine and create a directory object in the database mapping to the directory created in the file system.

The following example creates a directory in the filesystem, creates a directory object in the database, and grants privileges on the directory object to the SCOTT user.

$mkdir my_dump_dir
$sqlplus
Enter User: / as sysdba
SQL>create directory data_pump_dir as '/u01/oracle/my_dump_dir';

Now grant access on this directory object to the SCOTT user:

SQL> grant read,write on directory data_pump_dir to scott;

Example of Exporting a Full Database

To export the full database, give the following command:


$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp
       LOGFILE=myfullexp.log JOB_NAME=myfullJob

The above command will export the full database and create the dump file full.dmp in the directory /u01/oracle/my_dump_dir on the server.

In some cases where the database is in terabytes, the above command will not be feasible, since the dump file size will be larger than the operating system limit, and hence the export will fail. In this situation, you can create multiple dump files by typing the following command:

$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp
       FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob

This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp, and so on. The FILESIZE parameter specifies the maximum size of each dump file.
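The relationship between the total export size, the FILESIZE limit, and the number of full%U.dmp files can be sketched as follows (the 12 GB export size is a made-up example):

```python
import math

# Hypothetical export of 12 GB with FILESIZE=5G: Data Pump fills one
# dump file up to FILESIZE, then moves on to the next %U name.
export_size_gb = 12
filesize_gb = 5

n_files = math.ceil(export_size_gb / filesize_gb)
dump_files = ["full%02d.dmp" % u for u in range(1, n_files + 1)]

print(dump_files)  # ['full01.dmp', 'full02.dmp', 'full03.dmp']
```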

Example of Exporting a Schema

To export all the objects of SCOTT's schema, you can run the following Data Pump export command:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
       SCHEMAS=SCOTT

You can omit SCHEMAS since the default mode of Data Pump export is SCHEMAS only.

If you want to export objects of multiple schemas, you can specify the following command:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
       SCHEMAS=SCOTT,HR,ALI

Exporting Individual Tables using Data Pump Export

You can use the Data Pump Export utility to export individual tables. The following example shows the syntax to export tables:

$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp  

               TABLES=employees,jobs,departments

 

Exporting Tables located in a Tablespace

If you want to export tables located in a particular tablespace, you can type the following command:

 

$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp

TABLESPACES=tbs_4, tbs_5, tbs_6

 

The above will export all the objects located in tbs_4, tbs_5, and tbs_6.

 

Excluding and Including Objects during Export

You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with "A", you can type the following command:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
       SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'A%'"


Then all tables in Scott's schema whose names start with "A" will not be exported.

Similarly, you can also use the INCLUDE option to export only certain objects, like this:

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
       SCHEMAS=SCOTT INCLUDE=TABLE:"like 'A%'"

This is the opposite of the EXCLUDE option, i.e. it will export only those tables of Scott's schema whose names start with "A".

Similarly, you can also exclude or include INDEXES, CONSTRAINTS, GRANTS, USER, and SCHEMA objects.

Using Query to Filter Rows during Export

You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the emp table whose salary is above 10000 and whose dept_id is above 10.

 expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"'

        NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp

 

Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs)

Using Data Pump Export, you can suspend running export jobs and later resume or kill them. You can start a job on one client machine and then suspend it if some other work comes up. Afterwards, when your work is finished, you can continue the job from the same client where you stopped it, or you can restart the job from another client machine.

For example, suppose a DBA starts a full database export at one client machine, CLNT1, by typing the following command:


$expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir
       DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

After some time, the DBA wants to stop this job temporarily. He presses CTRL+C to enter interactive mode, and gets the Export> prompt where he can type interactive commands.

Now he wants to stop this export job, so he types the following command:

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and the client exits.

After finishing his other work, the DBA wants to resume the export job, but the client machine from where he started the job is locked because the user has locked his cabin. So the DBA goes to another client machine and reattaches to the job by typing the following command:

$expdp scott/tiger@mydb ATTACH=myfulljob

After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.

Export> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.

Note: After reattaching to the job, the DBA can also kill it by typing KILL_JOB if he does not want to continue with the export job.
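For example, to kill the job instead of continuing it, the DBA could type the following at the interactive prompt. Note that KILL_JOB also deletes the master table and the dump files created so far:

Export> KILL_JOB
Are you sure you wish to stop this job ([y]/n): y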

Data Pump Import Utility

Objects exported by the Data Pump Export utility can be imported into a database using the Data Pump Import utility. The following describes how to use the Data Pump Import utility to import objects.


Importing Full Dump File

If you want to import all the objects in a dump file, you can type the following command:

$impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y

LOGFILE=dpump_dir2:full_imp.log

 

This example imports everything from the expfull.dmp dump file. Because no DIRECTORY parameter is provided, a directory object must be specified on both the DUMPFILE parameter and the LOGFILE parameter.

Importing Objects of One Schema to another Schema

The following example loads all tables belonging to the hr schema into the scott schema:

 

$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp

REMAP_SCHEMA=hr:scott

 

If the SCOTT account exists in the database, then the hr objects are loaded into the scott schema. If the scott account does not exist, the Import utility creates the SCOTT account with an unusable password, because the dump file was exported and imported by the user SYSTEM, who has DBA privileges.

Loading Objects of One Tablespace into Another Tablespace

You can use the REMAP_TABLESPACE option to import objects of one tablespace into another tablespace by giving the command:

$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp

REMAP_TABLESPACE=users:sales


 

The above example loads tables stored in the users tablespace into the sales tablespace.

Generating SQL File containing DDL commands using Data Pump Import

 

You can generate a SQL file containing all the DDL commands that Import would have executed if you had actually run the Import utility.

The following is an example of using the SQLFILE parameter.

$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp

SQLFILE=dpump_dir2:expfull.sql

A SQL file named expfull.sql is written to dpump_dir2.

Importing objects of only a Particular Schema

If you have the IMP_FULL_DATABASE role, you can use this parameter to perform a schema-mode import by specifying a single schema other than your own or a list of schemas to import. First, the schemas themselves are created (if they do not already exist), including system and role grants, password history, and so on. Then all objects contained within the schemas are imported. Nonprivileged users can specify only their own schemas. In that case, no information about the schema definition is imported, only the objects contained within it.

Example

The following is an example of using the SCHEMAS parameter. You can create the expdat.dmp file used in this example by running the example provided for the Export SCHEMAS parameter.

$impdp hr/hr SCHEMAS=hr,oe DIRECTORY=dpump_dir1 LOGFILE=schemas.log

DUMPFILE=expdat.dmp

 


The hr and oe schemas are imported from the expdat.dmp file. The log file, schemas.log, is written to dpump_dir1.

Importing Only Particular Tables

The following example shows a simple use of the TABLES parameter to import only the employees and jobs tables from the expfull.dmp file. You can create the expfull.dmp dump file used in this example by running the example provided for the full database export in the previous topic.

$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs

This will import only employees and jobs tables from the DUMPFILE.

Running Import Utility in Interactive Mode

Similar to Data Pump Export, Data Pump Import jobs can also be suspended, resumed, or killed, and you can attach to an already existing import job from any client machine.

For example, suppose a DBA starts an import by typing the following command on client machine CLNT1:

$impdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir        DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. He then gets the Import> prompt, where he can type interactive commands.

Now he wants to stop this import job, so he types the following command:

Import> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state, and the client exits.


After finishing his other work, the DBA wants to resume the import job, but the client machine from which he originally started the job is unavailable because the user has locked his/her cabin. So the DBA goes to another client machine and reattaches to the job by typing the following command:

$impdp scott/tiger@mydb ATTACH=myfulljob

After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.

Import> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.

Note: After reattaching to the job, the DBA can also kill it by typing KILL_JOB if he does not want to continue with the import job.

Flashback Features

Oracle introduced the Flashback Query feature in Oracle 9i. It is useful for recovering from accidental statement failures. For example, suppose a user accidentally deletes rows from a table and even commits the change; using a flashback query, he can get the rows back.

Flashback features depend on how much undo retention time you have specified. If you set the UNDO_RETENTION parameter to 2 hours, Oracle attempts not to overwrite committed data in the undo tablespace until 2 hours have passed. Users can therefore recover only from mistakes made within the last 2 hours.
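For example, assuming the instance uses automatic undo management, a DBA could set a 2-hour undo retention target like this (UNDO_RETENTION is specified in seconds):

SQL> alter system set undo_retention=7200;

Keep in mind that this value is a target, not a guarantee: under space pressure Oracle may still overwrite unexpired undo unless the undo tablespace was created with RETENTION GUARANTEE.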

For example, suppose John issues a delete statement at 10 AM and commits it. One hour later he realizes the delete was a mistake. He can now issue a flashback AS OF query to get back the deleted rows, like this:


Flashback Query

SQL>select * from emp as of timestamp sysdate-1/24;

Or

SQL> SELECT * FROM emp AS OF TIMESTAMP       TO_TIMESTAMP('2007-06-07 10:00:00', 'YYYY-MM-DD HH:MI:SS')

To insert the accidentally deleted rows back into the table, he can type:

SQL> insert into emp (select * from emp as of timestamp sysdate-1/24);

Using Flashback Version Query

You use a Flashback Version Query to retrieve the different versions of specific rows that existed during a given time interval. A new row version is created whenever a COMMIT statement is executed.

The Flashback Version Query returns a table with a row for each version of the row that existed at any time during the time interval you specify. Each row in the result includes pseudocolumns of metadata about the row version. The available pseudocolumns are:

VERSIONS_XID       : Identifier of the transaction that created the row version
VERSIONS_OPERATION : Operation performed: I for Insert, U for Update, D for Delete
VERSIONS_STARTSCN  : System Change Number (SCN) when the row version was created
VERSIONS_STARTTIME : Timestamp when the row version was created
VERSIONS_ENDSCN    : SCN when the row version expired
VERSIONS_ENDTIME   : Timestamp when the row version expired

To understand this, let's walk through the following example.

Before starting this example, let us note the current timestamp:


SQL> select to_char(SYSTIMESTAMP,'YYYY-MM-DD HH24:MI:SS') from dual;

TO_CHAR(SYSTIMESTAMP
--------------------
2007-06-19 20:30:43

Suppose a user creates an emp table, inserts a row into it, and commits.

SQL> create table emp (empno number(5), name varchar2(20), sal number(10,2));

SQL> insert into emp values (101,'Sami',5000);
SQL> commit;

At this time emp table has one version of one row.

Now a user sitting at another machine erroneously changes the salary from 5000 to 2000 using an update statement:

SQL> update emp set sal=sal-3000 where empno=101;
SQL> commit;

Subsequently, a new transaction updates the name of the employee from Sami to Smith:

SQL> update emp set name='Smith' where empno=101;
SQL> commit;

At this point, the DBA detects the application error and needs to diagnose the problem. The DBA issues the following query to retrieve versions of the rows in the emp table that correspond to empno 101. The query uses Flashback Version Query pseudocolumns

SQL> connect / as sysdba
SQL> column versions_starttime format a16
SQL> column versions_endtime format a16
SQL> set linesize 120


SQL> select versions_xid, versions_starttime, versions_endtime,
            versions_operation, empno, name, sal
     from emp versions between
            timestamp to_timestamp('2007-06-19 20:30:00','yyyy-mm-dd hh24:mi:ss')
            and to_timestamp('2007-06-19 21:00:00','yyyy-mm-dd hh24:mi:ss');

VERSIONS_XID  V  STARTSCN  ENDSCN  EMPNO  NAME    SAL
------------  -  --------  ------  -----  ------  -----
0200100020D   U     12320            101  SMITH    2000
02001003C02   U     11345   12320    101  SAMI     2000
0002302C03A   I     11323   11345    101  SAMI     5000

 

The output should be read from bottom to top. From it we can see that an insert took place first, then the erroneous update of the salary, and then another update that changed the name.

The DBA identifies transaction 02001003C02 as erroneous and issues the following query to get the SQL command that undoes the change:

SQL> select operation, logon_user, undo_sql
     from flashback_transaction_query
     where xid = HEXTORAW('02001003C02');

OPERATION  LOGON_USER  UNDO_SQL
---------  ----------  ---------------------------------------
U          SCOTT       update emp set sal=5000 where ROWID =
                       'AAAKD2AABAAAJ29AAA';

 

Now the DBA can execute the command to undo the change made by the user:

SQL> update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA';

1 row updated.

Using Flashback Table to Return a Table to a Past State

Oracle Flashback Table provides the DBA the ability to recover a table or set of tables to a specified point in time in the past very quickly, easily, and without taking any part of the database offline. In many cases, Flashback Table eliminates the need to perform more complicated point-in-time recovery operations.

Flashback Table uses information in the undo tablespace to restore the table. Therefore, the UNDO_RETENTION parameter is significant when flashing back tables to a past state: you can only flash back tables up to the retention time you specified.

Row movement must be enabled on the table for which you are issuing the FLASHBACK TABLE statement. You can enable row movement with the following SQL statement:

ALTER TABLE table ENABLE ROW MOVEMENT;

The following example performs a FLASHBACK TABLE operation on the table emp:

 

FLASHBACK TABLE emp TO TIMESTAMP

        TO_TIMESTAMP('2007-06-19 09:30:00', 'YYYY-MM-DD HH24:MI:SS');

The emp table is restored to its state when the database was at the time specified by the timestamp.


 

Example: At 17:00 an HR administrator discovers that an employee "JOHN" is missing from the EMPLOYEES table. This employee was present at 14:00, the last time she ran a report; someone accidentally deleted the record for "JOHN" between 14:00 and the present time. She uses Flashback Table to return the table to its state at 14:00, as shown in this example:

FLASHBACK TABLE EMPLOYEES TO TIMESTAMP

         TO_TIMESTAMP('2007-06-21 14:00:00','YYYY-MM-DD HH24:MI:SS')

         ENABLE TRIGGERS;

 

You have to give the ENABLE TRIGGERS option if you want the table's triggers to stay enabled; otherwise, by default, all database triggers on the table are disabled during the Flashback Table operation.

 

Recovering Dropped Tables (Undo Drop Table)

In Oracle 10g, Oracle introduced the concept of the Recycle Bin: when you drop a table, the database does not immediately reclaim the space used by the table. Instead, the table is renamed and placed in the Recycle Bin. The FLASHBACK TABLE...BEFORE DROP command will restore the table.

This feature does not depend on the undo tablespace, so the UNDO_RETENTION parameter has no impact on it.

For example, suppose a user accidentally drops the emp table:

SQL>drop table emp;

Table dropped.

Now, to the user, the table appears to be dropped, but it is actually renamed and placed in the Recycle Bin. To recover the dropped table, a user can type the command:


SQL> Flashback table emp to before drop;

You can also restore the dropped table under a different name, like this:

SQL> Flashback table emp to before drop rename to emp2;

Purging Objects from Recycle Bin

If you want to reclaim the space used by a dropped table, give the following command:

SQL> purge table emp;

If you want to purge all objects belonging to the logged-on user, give the following command:

SQL> purge recyclebin;

If you want to reclaim space for dropped objects of a particular tablespace, give the command:

SQL> purge tablespace hr;

You can also purge only objects from a tablespace belonging to a specific user, using the following form of the command:

SQL>PURGE TABLESPACE hr USER scott;

If you have the SYSDBA privilege, then you can purge all objects from the recycle bin, regardless of which user owns the objects, using this command:

SQL>PURGE DBA_RECYCLEBIN;

 

 

To view the contents of the Recycle Bin, give the following command:

SQL> show recyclebin;
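You can also query the recycle bin through the USER_RECYCLEBIN data dictionary view (or DBA_RECYCLEBIN for a DBA), which shows the system-generated name, the original name, and the drop time of each object:

SQL> select object_name, original_name, droptime from user_recyclebin;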


Permanently Dropping Tables

If you want to permanently drop a table without putting it into the Recycle Bin, drop it with the PURGE clause, like this:

SQL> drop table emp purge;

This will drop the table permanently and it cannot be restored.

Flashback Drop of Multiple Objects With the Same Original Name

You can create, and then drop, several objects with the same original name, and they will all be stored in the recycle bin. For example, consider these SQL statements:

CREATE TABLE EMP ( ...columns ); # EMP version 1

DROP TABLE EMP;

CREATE TABLE EMP ( ...columns ); # EMP version 2

DROP TABLE EMP;

CREATE TABLE EMP ( ...columns ); # EMP version 3

DROP TABLE EMP;

 

In such a case, each table EMP is assigned a unique name in the recycle bin when it is dropped. You can use a FLASHBACK TABLE... TO BEFORE DROP statement with the original name of the table, as shown in this example:

FLASHBACK TABLE EMP TO BEFORE DROP;

 

The most recently dropped table with that original name is retrieved from the recycle bin, with its original name. You can retrieve it and assign it a new name using a RENAME TO clause. The following example shows the retrieval from the recycle bin of all three dropped EMP tables from the previous example, with each assigned a new name:

FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_3;

FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_2;


FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_1;

Important Points:

1. There is no guarantee that objects will remain in the Recycle Bin.

Oracle might empty the recycle bin whenever space pressure occurs, i.e., whenever a tablespace becomes full and a transaction requires new extents, Oracle will delete objects from the recycle bin.

2. A table and all of its dependent objects (indexes, LOB segments, nested tables, triggers, constraints, and so on) go into the recycle bin together when you drop the table. Likewise, when you perform Flashback Drop, the objects are generally all retrieved together.

3. There is no fixed amount of space allocated to the recycle bin, and no guarantee as to how long dropped objects remain in the recycle bin. Depending upon system activity, a dropped object may remain in the recycle bin for seconds, or for months.

 

Flashback Database: Alternative to Point-In-Time Recovery

Oracle Flashback Database lets you quickly recover the entire database from logical data corruptions or user errors.

To enable Flashback Database, you set up a flash recovery area, and set a flashback retention target, to specify how far back into the past you want to be able to restore your database with Flashback Database.

Once you set these parameters, the database, at regular intervals, copies images of each altered block in every datafile into flashback logs stored in the flash recovery area. These flashback logs are used to flash back the database to a point in time.

Enabling Flashback Database

Step 1. Shut down the database if it is already running, and set the following parameters:

           DB_RECOVERY_FILE_DEST=/d01/ica/flasharea
           DB_RECOVERY_FILE_DEST_SIZE=10G
           DB_FLASHBACK_RETENTION_TARGET=4320

(Note: DB_FLASHBACK_RETENTION_TARGET is specified in minutes; here we have specified 3 days, i.e. 3 x 24 x 60 = 4320.)

Step 2. Start the instance and mount the Database.

            SQL>startup mount;

Step 3. Now enable Flashback Database by giving the following command:

            SQL>alter database flashback on;

Oracle now starts writing flashback logs to the recovery area.

How large should the flash recovery area be?

After you have enabled the Flashback Database feature and allowed the database to generate some flashback logs, run the following query:

SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

 

This shows the size to which the recovery area should be set.

How far back can you flash back the database?

To determine the earliest SCN and the earliest time to which you can flash back your database, give the following query:

SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME

         FROM V$FLASHBACK_DATABASE_LOG;

 

 

Example: Flashing Back Database to a point in time

 


Suppose a user erroneously drops a schema at 10:00 AM, and you, as the DBA, come to know of this at 5 PM. Since you have configured the flashback area and set the flashback retention target to 3 days, you can flash back the database to 9:59 AM by following this procedure:

1. Start RMAN:

$rman target /

2. Run the FLASHBACK DATABASE command to return the database to 9:59 AM:

RMAN> FLASHBACK DATABASE TO TIME timestamp('2007-06-21 09:59:00');

Or, you can also type this command:

RMAN> FLASHBACK DATABASE TO TIME (SYSDATE-8/24);

3. When the Flashback Database operation completes, you can evaluate the results by opening the database read-only and running some queries to check whether Flashback Database has returned the database to the desired state.

           RMAN> SQL 'ALTER DATABASE OPEN READ ONLY';

 

At this time, you have several options.

Option 1: If you are content with the result, you can open the database by performing ALTER DATABASE OPEN RESETLOGS:

SQL> ALTER DATABASE OPEN RESETLOGS;


Option 2: If you discover that you have chosen the wrong target time for your Flashback Database operation, you can use RECOVER DATABASE UNTIL to bring the database forward, or perform FLASHBACK DATABASE again with an SCN further in the past. You can completely undo the effects of your flashback operation by performing complete recovery of the database:

        RMAN> RECOVER DATABASE;

 

Option 3: If you only want to retrieve some lost data from the past time, you can open the database read-only, perform a logical export of the data using the Oracle Export utility, then run RECOVER DATABASE to return the database to the present time, and re-import the data using the Oracle Import utility.

4. Since in our example only a schema was dropped and the rest of the database is good, the third option is relevant for us. Now come out of RMAN and run the Export utility to export the whole schema:

$exp userid=system/manager file=scott.dmp owner=SCOTT

5. Now start RMAN and recover the database to the present time:

$rman target /
RMAN> RECOVER DATABASE;

 


6. After the database is recovered, shut down and restart the database in normal mode, and import the schema by running the Import utility:

$imp userid=system/manager file=scott.dmp

LogMiner

Using the LogMiner utility, you can query the contents of online redo log files and archived log files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

LogMiner Configuration

There are three basic objects in a LogMiner configuration that you should be familiar with: the source database, the LogMiner dictionary, and the redo log files containing the data of interest.

The source database is the database that produces all the redo log files that you want LogMiner to analyze.

The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.

LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

For example, consider the following SQL statement:

INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY)  VALUES('IT_WT','Technical Writer', 4000, 11000);

 

Without the dictionary, LogMiner will display:


insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values

(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),

HEXTORAW('c229'),HEXTORAW('c3020b'));

 

The redo log files contain the changes made to the database or the database dictionary.

 

LogMiner Dictionary Options

LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

Using the Online Catalog

Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

Extracting a LogMiner Dictionary to the Redo Log Files

Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

Extracting the LogMiner Dictionary to a Flat File

This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.

Using the Online Catalog


To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(-

       OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Extracting a LogMiner Dictionary to the Redo Log Files

To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).

To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

Extracting the LogMiner Dictionary to a Flat File

When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.

1. Set the initialization parameter, UTL_FILE_DIR, in the initialization parameter file. For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the initialization parameter file:

UTL_FILE_DIR = /oracle/database

 

2. Start the Database


SQL> startup

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora','/oracle/database/',

                       DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

Redo Log File Options

To mine data in the redo log files, LogMiner needs information about which redo log files to mine.

You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:

Automatically

If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner.

Manually

Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.

Example: Finding All Modifications in the Current Redo Log File

The easiest way to examine the modification history of a database is to mine at the source database and use the online catalog to translate the redo log files. This example shows how to do the simplest analysis using LogMiner.


Step 1 Specify the list of redo log files to be analyzed.

Specify the redo log files which you want to analyze.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -

       LOGFILENAME => '/usr/oracle/ica/log1.ora',

       OPTIONS => DBMS_LOGMNR.NEW);

 

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -

       LOGFILENAME => '/u01/oracle/ica/log2.ora',

       OPTIONS => DBMS_LOGMNR.ADDFILE);

Step 2 Start LogMiner.

Start LogMiner and specify the dictionary to use.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -

       OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Step 3 Query the V$LOGMNR_CONTENTS view.

Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not). The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.

SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' ||  XIDSQN) AS XID,SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');

 

USR  XID        SQL_REDO / SQL_UNDO
---  ---------  --------------------------------------------------------------
HR   1.11.1476  REDO: set transaction read write;

HR   1.11.1476  REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME",
                "LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID",
                "SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID")
                values ('306','Mohammed','Sami','MDSAMI','1234567890',
                TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy hh24:mi:ss'),
                'HR_REP','120000','.05','105','10');
                UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '306'
                and "FIRST_NAME" = 'Mohammed' and "LAST_NAME" = 'Sami' and
                "EMAIL" = 'MDSAMI' and "PHONE_NUMBER" = '1234567890' and
                "HIRE_DATE" = TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy
                hh24:mi:ss') and "JOB_ID" = 'HR_REP' and "SALARY" = '120000'
                and "COMMISSION_PCT" = '.05' and "DEPARTMENT_ID" = '10' and
                ROWID = 'AAAHSkAABAAAY6rAAO';

OE   1.1.1484   REDO: set transaction read write;

OE   1.1.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1799' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                ROWID = 'AAAHTKAABAAAY9mAAB';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1799' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                ROWID = 'AAAHTKAABAAAY9mAAB';

OE   1.1.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1801' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and
                ROWID = 'AAAHTKAABAAAY9mAAC';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1801' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and
                ROWID = 'AAAHTKAABAAAY9mAAC';

HR   1.11.1476  REDO: insert into "HR"."EMPLOYEES"(...) values ('307','John',
                'Silver','JSILVER','5551112222',TO_DATE('10-jan-2003 13:41:03',
                'dd-mon-yyyy hh24:mi:ss'),'SH_CLERK','110000','.05','105','50');
                UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '307'
                and "FIRST_NAME" = 'John' and "LAST_NAME" = 'Silver' and
                "EMAIL" = 'JSILVER' and "PHONE_NUMBER" = '5551112222' and
                "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy
                hh24:mi:ss') and "DEPARTMENT_ID" = '50' and
                ROWID = 'AAAHSkAABAAAY6rAAP';

OE   1.1.1484   REDO: commit;

HR   1.15.1481  REDO: set transaction read write;

HR   1.15.1481  REDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '205'
                and "FIRST_NAME" = 'Shelley' and "LAST_NAME" = 'Higgins' and
                "EMAIL" = 'SHIGGINS' and "PHONE_NUMBER" = '515.123.8080' and
                "HIRE_DATE" = TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy
                hh24:mi:ss') and "JOB_ID" = 'AC_MGR' and "SALARY" = '12000'
                and "COMMISSION_PCT" IS NULL and "MANAGER_ID" = '101' and
                "DEPARTMENT_ID" = '110' and ROWID = 'AAAHSkAABAAAY6rAAM';
                UNDO: insert into "HR"."EMPLOYEES"(...) values ('205',
                'Shelley','Higgins','SHIGGINS','515.123.8080',
                TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss'),
                'AC_MGR','12000',NULL,'101','110');

OE   1.8.1484   REDO: set transaction read write;

OE   1.8.1484   REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                TO_YMINTERVAL('+12-06') where "PRODUCT_ID" = '2350' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and
                ROWID = 'AAAHTKAABAAAY9tAAD';
                UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" =
                TO_YMINTERVAL('+20-00') where "PRODUCT_ID" = '2350' and
                "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06') and
                ROWID = 'AAAHTKAABAAAY9tAAD';

HR   1.11.1476  REDO: commit;

Step 4 End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example of Mining Without Specifying the List of Redo Log Files Explicitly

The previous example explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.
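For the SCN-based form, the call is the same except that you supply STARTSCN and ENDSCN instead of STARTTIME and ENDTIME. A minimal sketch, assuming the dictionary is taken from the online catalog (the SCN values here are purely illustrative):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(-
       STARTSCN => 621047, -
       ENDSCN   => 625695, -
       OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
                   DBMS_LOGMNR.CONTINUOUS_MINE);

As with the time-range form, LogMiner then locates the redo log files covering that SCN range on its own.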

Example : Mining Redo Log Files in a Given Time Range

This example assumes that you want to use the data dictionary extracted to the redo log files.

Step 1 Determine the timestamp of the redo log file that contains the start of the data dictionary.

SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG

WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG

       WHERE DICTIONARY_BEGIN = 'YES');

 

 

NAME                                          FIRST_TIME

--------------------------------------------  --------------------

/usr/oracle/data/db1arch_1_207_482701534.dbf  10-jan-2003 12:01:34

 


Step 2 Display all the redo log files that have been generated so far.

This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS

       WHERE LOW_TIME > '10-jan-2003 12:01:34';

 

NAME

----------------------------------------------

/usr/oracle/data/db1arch_1_207_482701534.dbf

/usr/oracle/data/db1arch_1_208_482701534.dbf

/usr/oracle/data/db1arch_1_209_482701534.dbf

/usr/oracle/data/db1arch_1_210_482701534.dbf

Step 3 Start LogMiner.

Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(-

       STARTTIME => '10-jan-2003 12:01:34', -

         ENDTIME => SYSDATE, -

                OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -

                    DBMS_LOGMNR.COMMITTED_DATA_ONLY + -

                    DBMS_LOGMNR.PRINT_PRETTY_SQL + -

                    DBMS_LOGMNR.CONTINUOUS_MINE);

Step 4 Query the V$LOGMNR_LOGS view.


This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;

 

NAME

------------------------------------------------------

/usr/oracle/data/db1arch_1_207_482701534.dbf

/usr/oracle/data/db1arch_1_208_482701534.dbf

/usr/oracle/data/db1arch_1_209_482701534.dbf

/usr/oracle/data/db1arch_1_210_482701534.dbf

 

Step 5 Query the V$LOGMNR_CONTENTS view.

To reduce the number of rows returned by the query, exclude all DML statements done in the sys or system schema. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.)

Note that all reconstructed SQL statements returned by the query are correctly translated.

SQL> SELECT USERNAME AS usr,(XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID,

       SQL_REDO FROM V$LOGMNR_CONTENTS

       WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM') AND

       TIMESTAMP > '10-jan-2003 15:59:53';

 

USR             XID         SQL_REDO


-----------     --------    -----------------------------------

SYS             1.2.1594    set transaction read write;

SYS             1.2.1594    create table oe.product_tracking (product_id number not null,

                            modified_time date,

                            old_list_price number(8,2),

                            old_warranty_period interval year(2) to month);

SYS             1.2.1594    commit;

 

SYS             1.18.1602   set transaction read write;

SYS             1.18.1602   create or replace trigger oe.product_tracking_trigger

                            before update on oe.product_information

                            for each row

                            when (new.list_price <> old.list_price or

               new.warranty_period <> old.warranty_period)

                            declare

                            begin

                            insert into oe.product_tracking values

                               (:old.product_id, sysdate,

                                :old.list_price, :old.warranty_period);

                            end;

SYS             1.18.1602   commit;

 

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"

                              set

                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),

                                "LIST_PRICE" = 100


                              where

                                "PRODUCT_ID" = 1729 and

                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and

                                "LIST_PRICE" = 80 and

                                ROWID = 'AAAHTKAABAAAY9yAAA';

OE              1.9.1598    insert into "OE"."PRODUCT_TRACKING"

                              values

                                "PRODUCT_ID" = 1729,

                                "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03',

                                'dd-mon-yyyy hh24:mi:ss'),

                                "OLD_LIST_PRICE" = 80,

                                "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

 

OE              1.9.1598    update "OE"."PRODUCT_INFORMATION"

                              set

                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),

                                "LIST_PRICE" = 92

                              where

                                "PRODUCT_ID" = 2340 and

                                "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and

                                "LIST_PRICE" = 72 and

                                ROWID = 'AAAHTKAABAAAY9zAAA';

 

OE              1.9.1598    insert into "OE"."PRODUCT_TRACKING"

                              values


                                "PRODUCT_ID" = 2340,

                                "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07',

                                'dd-mon-yyyy hh24:mi:ss'),

                                "OLD_LIST_PRICE" = 72,

                                "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

 

OE              1.9.1598     commit;

Step 6 End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

BACKUP AND RECOVERY

Opening or Bringing the Database in ARCHIVELOG Mode.

To open the database in ARCHIVELOG mode, follow these steps:

STEP 1: Shut down the database if it is running.

STEP 2: Take a full offline backup.

STEP 3: Set the following parameters in the parameter file.

LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc

LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"

If you want, you can also specify a second destination:

LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"

STEP 4: Start and mount the database.

SQL> STARTUP MOUNT

STEP 5: Give the following command:

SQL> ALTER DATABASE ARCHIVELOG;

STEP 6: Then type the following to confirm:

SQL> ARCHIVE LOG LIST

STEP 7: Now open the database.

SQL> alter database open;

STEP 8: It is recommended that you take a full backup after you have brought the database into ARCHIVELOG mode.
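You can confirm the archiving mode at any time by querying the V$DATABASE view; after the steps above it should report ARCHIVELOG:

SQL> SELECT LOG_MODE FROM V$DATABASE;

LOG_MODE
------------
ARCHIVELOG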

To bring the database back into NOARCHIVELOG mode, follow these steps:

STEP 1: Shut down the database if it is running.

STEP 2: Comment out the following parameters in the parameter file by putting "#" in front of them.

# LOG_ARCHIVE_DEST_1="location=/u02/ica/arc1"

# LOG_ARCHIVE_DEST_2="location=/u02/ica/arc2"

# LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc

STEP 3: Start and mount the database.

SQL> STARTUP MOUNT

STEP 4: Give the following command:

SQL> ALTER DATABASE NOARCHIVELOG;

STEP 5: Shut down the database and take a full offline backup.

TAKING OFFLINE BACKUPS (UNIX)

Start SQL*Plus, connect as SYSDBA, and shut down the database if it is running.

$sqlplus

SQL> connect / as sysdba

SQL> shutdown immediate

SQL> exit

After shutting down the database, copy all the datafiles, logfiles, controlfiles, the parameter file, and the password file to your backup destination.

TIP:

To identify the datafiles and logfiles, query the dynamic performance views V$DATAFILE and V$LOGFILE before shutting down.
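For example, the following queries list the files that must be copied; V$CONTROLFILE similarly lists the control files (all three are standard dynamic performance views):

SQL> SELECT NAME FROM V$DATAFILE;

SQL> SELECT MEMBER FROM V$LOGFILE;

SQL> SELECT NAME FROM V$CONTROLFILE;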

Let's suppose all the files are in the "/u01/ica" directory. Then the following command copies all the files to the backup destination /u02/backup.

$cd /u01/ica

$cp * /u02/backup/

Be sure to remember the destination of each file; this will be useful when restoring from this backup. You can create a text file listing the destination of each file for future use. Now you can open the database.

TAKING ONLINE (HOT) BACKUPS (UNIX)

To take online backups, the database should be running in ARCHIVELOG mode. To check whether the database is running in ARCHIVELOG or NOARCHIVELOG mode, start SQL*Plus and connect as SYSDBA.

After connecting, give the command ARCHIVE LOG LIST; this will show you the status of archiving.

$sqlplus

Enter User:/ as sysdba

SQL> ARCHIVE LOG LIST

If the database is running in ARCHIVELOG mode, then you can take online backups.

Let us suppose we want to take an online backup of the USERS tablespace. You can query the V$DATAFILE view to find the names of the datafiles associated with this tablespace. Let's suppose the file is "/u01/ica/usr1.dbf".
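A convenient way to map a tablespace to its datafiles is the DBA_DATA_FILES view:

SQL> SELECT FILE_NAME FROM DBA_DATA_FILES
       WHERE TABLESPACE_NAME = 'USERS';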

Give the following series of commands to take an online backup of the USERS tablespace.


$sqlplus

Enter User:/ as sysdba

SQL> alter tablespace users begin backup;

SQL> host cp /u01/ica/usr1.dbf   /u02/backup

SQL> alter tablespace users end backup;

SQL> exit;
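While a tablespace is in backup mode, its datafiles are reported as ACTIVE in the V$BACKUP view. Before finishing, it is a good habit to verify that no file was accidentally left in backup mode:

SQL> SELECT FILE#, STATUS FROM V$BACKUP;

A STATUS of NOT ACTIVE for every file means all tablespaces have been taken out of backup mode.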

RECOVERING THE DATABASE IF IT IS RUNNING IN NOARCHIVELOG MODE.

Option 1: When you don’t have a backup.

If you have lost one datafile, you don't have any backup, and the datafile does not contain important objects, then you can drop the damaged datafile and open the database. You will lose all information contained in the damaged datafile.

The following are the steps to drop a damaged datafile and open the database.

(UNIX)

STEP 1: First take full backup of database for safety.

STEP 2: Start SQL*Plus and give the following commands.

$sqlplus

Enter User:/ as sysdba

SQL> STARTUP MOUNT

SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE DROP;

SQL> alter database open;

Option 2: When you have a backup.

If the database is running in NOARCHIVELOG mode and you have a full backup, then there are two options for you.


i. Either you can drop the damaged datafile, if it does not contain important information and you can afford to lose its contents.

ii. Or you can restore from the full backup. You will lose all the changes made to the database since the last full backup.

To drop the damaged datafile, follow the steps shown previously.

To restore from the full database backup, do the following.

STEP 1: Take a full backup of the current database.

STEP 2: Restore from the full database backup, i.e., copy all the files from the backup to their original locations.

(UNIX)

Suppose the backup is in the "/u02/backup" directory. Then do the following.

$cp /u02/backup/*  /u01/ica

This will copy all the files from the backup directory to their original destinations. Also remember to copy the control files to all the mirrored locations.

RECOVERING FROM LOSS OF CONTROL FILE.

If you have lost the control file and it is mirrored, then simply copy the control file from a mirrored location to the damaged location and open the database.

If you have lost all the mirrored control files but all the datafiles and logfiles are intact, then you can re-create the control file.

If you have already taken a backup of the control file creation statement, by giving the command "ALTER DATABASE BACKUP CONTROLFILE TO TRACE;", and you have not added any tablespace since then, just re-create the control file by executing that statement.

But if you have added any new tablespace after generating the create controlfile statement, then you have to edit the script and include the filename and size of the new datafile in it.

If your script file containing the control file creation statement is "CR.SQL", then just do the following.

STEP 1: Start sqlplus

STEP 2: connect / as sysdba

STEP 3: Start the instance without mounting the database, like this:

SQL> STARTUP NOMOUNT

STEP 4: Run the "CR.SQL" script file.

STEP 5: Mount and open the database.

SQL> alter database mount;

SQL> alter database open;

If you do not have a backup of the control file creation statement, then you have to manually give the CREATE CONTROLFILE statement, writing out the filenames and sizes of all the datafiles. You will lose any datafiles that you do not include.
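As an illustration, a minimal sketch of such a statement is shown below; the database name, file names, sizes, and character set here are all hypothetical and must be replaced with your own values:

SQL> CREATE CONTROLFILE REUSE DATABASE "ICA" NORESETLOGS ARCHIVELOG
       MAXLOGFILES 16
       MAXDATAFILES 100
       LOGFILE
         GROUP 1 '/u01/ica/redo01.log' SIZE 50M,
         GROUP 2 '/u01/ica/redo02.log' SIZE 50M
       DATAFILE
         '/u01/ica/system01.dbf',
         '/u01/ica/usr1.dbf'
       CHARACTER SET WE8ISO8859P1;

Every datafile of the database must appear in the DATAFILE clause, and every online redo log group in the LOGFILE clause.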

Refer to "Managing Control File" topic for the CREATE CONTROL FILE statement.

Recovering the Database When It Is Running in ARCHIVELOG Mode.

Recovering from the loss of a damaged datafile:

If you have lost one datafile, then follow the steps shown below.

STEP 1. Shut down the database if it is running.

STEP 2. Restore the datafile from the most recent backup.

STEP 3. Then start SQL*Plus and connect as SYSDBA.

$sqlplus

Enter User:/ as sysdba

SQL> STARTUP MOUNT

SQL> SET AUTORECOVERY ON

SQL> RECOVER DATABASE

If all archived log files are available, then recovery should proceed smoothly. After you get the "Media recovery complete" message, go on to the next step.

STEP 4. Now open the database.

SQL> alter database open;

Recovering from Lost Archived Files:

If you have lost the archived log files, then immediately shut down the database and take a full offline backup.

Time-Based Recovery (INCOMPLETE RECOVERY).

Suppose a user has accidentally dropped a crucial table and you have to recover it.

You took a full backup of the database on Monday, 13-Aug-2007. The table was created on Tuesday, 14-Aug-2007, and thousands of rows were inserted into it. A user accidentally dropped the table on Thursday, 16-Aug-2007, and nobody noticed until Saturday.

Now, to recover the table, follow these steps.

STEP 1. Shut down the database and take a full offline backup.

STEP 2. Restore all the datafiles, logfiles, and the control file from the full offline backup taken on Monday.

STEP 3. Start SQL*Plus, then start and mount the database.

STEP 4. Then give the following command to recover the database until the specified time (the time format is 'YYYY-MM-DD:HH24:MI:SS').

SQL> recover database until time '2007-08-16:13:55:00'
         using backup controlfile;

STEP 5. Open the database and reset the logs, because you have performed an incomplete recovery:

SQL> alter database open resetlogs;

STEP 6. After the database is open, export the table to a dump file using the Export utility.
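For example, with the original Export and Import utilities the recovered table can be exported here and then, in STEP 8, imported back; the schema, password, table name, and dump file path below are all hypothetical:

$exp hr/hr_password tables=IMPORTANT_TABLE file=/u02/exp/important_table.dmp

$imp hr/hr_password tables=IMPORTANT_TABLE file=/u02/exp/important_table.dmp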


STEP 7. Restore from the full database backup that you took on Saturday (in STEP 1).

STEP 8. Open the database and import the table.

  

Note: In Oracle 10g you can easily recover dropped tables by using the Flashback feature. For further information, please refer to the Flashback Features topic in this book.