Disaster recovery using Veritas Storage Foundation Enterprise HA with IBM Storwize V7000 Metro Mirror and Global Mirror: Solution installation and configuration
This paper describes how Symantec and IBM installed, configured, and validated high availability and disaster recovery configurations for Oracle with IBM Storwize V7000 systems. To learn more about the IBM Storwize V7000, visit http://ibm.co/TaLb6Q.
Contents
About high availability
About disaster recovery
About IBM
About Symantec
About Veritas Storage Foundation HA
About IBM Storwize V7000 system
Setting up the Storwize V7000 remote copy partnership
Creating Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy consistency groups
Creating Storwize V7000 Metro Mirror, Storwize V7000 Global Mirror relationship, and FlashCopy mappings
Installing Veritas Storage Foundation
Disk space
Virtual IP address
Prerequisites for local and remote cluster installation
Mounting a software disk
Installing SFHA 5.1 SP1PR1 using the Veritas product installers
Installing Veritas Storage Foundation HA using the webinstaller interface
Setting up Fire Drill
About Fire Drill resource
SVCCopyServicesSnap resource type definition
Attribute definitions for the SVCCopyServicesSnap agent
This paper describes how Symantec and IBM have installed, configured, and validated high availability (HA) and disaster recovery (DR) configurations for Oracle with IBM Storwize V7000 systems. These validations include local HA configurations using Veritas Storage Foundation and Veritas Cluster Server (VCS). The configuration was extended to a DR configuration using IBM Storwize V7000 Metro Mirror for synchronous replication and Storwize V7000 Global Mirror for asynchronous replication using the VCS agent for IBM SVCCopyServices and VCS Global Cluster Option (GCO) for alternate site failover and failback capability.
Introduction
Infrastructure for mission-critical applications must be able to meet the organization's recovery time
objective (RTO) and recovery point objective (RPO) for resuming operation in the event of a site disaster.
This solution addresses environments where the RPOs and RTOs are in the range of minutes to a few
hours. While backup is the foundation for any DR plan, typical RTOs for tape-based backup are well
beyond these objectives. Also, replication alone is not enough as having the application data at a DR site
is of limited use without also having the ability to start the correct sequence of database management
systems, application servers, and business applications.
Symantec's DR solutions, Metro Clustering and Global Clustering, are extensions of local HA clustering
using Veritas Storage Foundation and Veritas Cluster Server. This validated and documented solution is
an example of Global Clustering, which is a collection of two or more VCS clusters at separate locations
linked together with VCS Global Cluster option to enable wide-area failover and disaster recovery. Each
local cluster within the global cluster is connected to its own shared storage. Local clustering provides
local failover for each site. IBM® Storwize® V7000 storage system Metro Mirror replicates data between
sites to maintain synchronized copies of storage at the two sites. For a disaster that affects an entire site,
the customer decides whether to move operations to the disaster recovery site. When that decision
is made, the application is automatically migrated to a system at the DR site. IBM Storwize V7000
Global Mirror replicates data asynchronously between sites to enable recovery at the DR site.
About high availability
The term high availability (HA) refers to a state in which data and applications remain available
because software or hardware is in place to maintain continued operation in the event of a computer
failure. HA can refer to any software or hardware that provides fault tolerance, but the term has
become generally associated with clustering. Local clustering provides HA through database and
application failover.
Veritas Storage Foundation Enterprise HA (SF/HA) includes Veritas Storage Foundation and Veritas
Cluster Server and provides the capability for local clustering. The Storwize V7000 disk system includes a
wide range of HA features as well.
About disaster recovery
Wide-area disaster recovery provides the ultimate protection for data and applications in the event of a
disaster. With an appropriate disaster recovery solution in place, if a disaster affects a local or
metropolitan area, data and critical services can be failed over to a site hundreds or even thousands of
miles away. IBM Storwize V7000 Metro Mirror and Global Mirror, combined with Veritas Storage
Foundation Enterprise HA/DR provide the capability for implementing disaster recovery.
Veritas Storage Foundation and High Availability Installation Guide
https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/sf_install_51sp1pr1_aix.pdf
Veritas Cluster Server
Veritas Cluster Server Installation Guide
https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_install_51sp1pr1_aix.pdf
Veritas Cluster Server Administrator's Guide
https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_admin_51sp1pr1_aix.pdf
Veritas Cluster Server Agent for Oracle Installation and Configuration Guide
https://sort.symantec.com/public/documents/sfha/5.1sp1pr1/aix/productguides/pdf/vcs_oracle_agent_51sp1pr1_aix.pdf
Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide
Included here are some general guidelines for configuration of the Storwize V7000 in preparation for the
Storage Foundation for High Availability installation.
Fabric zoning
For more details on zoning requirements for Storwize V7000, review the sections on Zoning details and
Zoning examples available in the Storwize V7000 Information Center at
http://publib.boulder.ibm.com/infocenter/storwize/ic/index.jsp. Table 4 shows the zones created for this
configuration.
Table 4. Zone configuration for Fabric 1 and 2 (SAN01, SAN03 / SAN02, SAN04)
ISV7K4: includes aliases for two Storwize V7000 ports and a port each from ISVP12 / ISVP13
ISV7KD10: includes aliases for two Storwize V7000 ports and a port each from ISVP14 / ISVP15
ISV7KD10_ISV7K4_Mirror: includes one Storwize V7000 port from each disk system to allow formation of the intercluster link for remote copy (Global Mirror and Metro Mirror)
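As one illustration only, on a Brocade fabric the Table 4 zones might be defined with commands like the following; the alias names (v7k4_p1, isvp12_p1, and so on) and the configuration name SAN01_cfg are hypothetical, and the exact syntax depends on your switch vendor and firmware:

```
zonecreate "ISV7K4", "v7k4_p1; v7k4_p2; isvp12_p1; isvp13_p1"
zonecreate "ISV7KD10", "v7kd10_p1; v7kd10_p2; isvp14_p1; isvp15_p1"
zonecreate "ISV7KD10_ISV7K4_Mirror", "v7k4_p3; v7kd10_p3"
cfgadd "SAN01_cfg", "ISV7K4; ISV7KD10; ISV7KD10_ISV7K4_Mirror"
cfgenable "SAN01_cfg"
```

Consult your switch documentation for the authoritative commands; the point is only that each zone contains the port aliases listed in Table 4.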
gmintradelaysimulation: the intracluster delay simulation, which simulates the Global Mirror round-trip delay in milliseconds. The default is 0; the valid range is 0 to 100 milliseconds.
You can display and change these cluster settings using the following commands:
IBM_2076:ISV7K4:admin> svcinfo lscluster ISV7K4
IBM_2076:ISV7K4:admin> svctask chcluster -alias 00000200A0C000A0 -name ISV7K4 -gmlinktolerance 300 -gminterdelaysimulation 20 -gmintradelaysimulation 0
Creating Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy consistency
groups
You can create Metro Mirror and Global Mirror consistency groups by specifying a name and the remote
Storwize V7000 name as shown below. Make sure the two Storwize V7000 systems are up and in
communication throughout the create process. The new consistency group does not contain any
relationships and will be in an empty state. A consistency group is used to ensure that a number of
relationships are managed so that in the event of a disconnection of the relationships, the data on all
volumes within the group is in a consistent state. This can be important in a database application where
data files and logs are spread across multiple volumes.
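As a hedged sketch, the consistency groups described above could be created from the SVC CLI with the mkrcconsistgrp command; the group names testmm_mm_cg and testmm_gm_cg are hypothetical names for this configuration:

```
IBM_2076:ISV7K4:admin> svctask mkrcconsistgrp -cluster ISV7KD10 -name testmm_mm_cg
IBM_2076:ISV7K4:admin> svctask mkrcconsistgrp -cluster ISV7KD10 -name testmm_gm_cg
IBM_2076:ISV7K4:admin> svcinfo lsrcconsistgrp
```

A newly created group contains no relationships and reports an empty state until relationships are added to it.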
2. Place the Veritas software disk into a DVD drive connected to your system.
3. Mount the disk by determining the device access name of the DVD drive.
The format for the device access name is cdX where X is the device number. After inserting the
disk, type the following commands:
# mkdir -p /cdrom
# mount -V cdrfs -o ro /dev/cdX /cdrom
Installing SFHA 5.1 SP1PR1 using the Veritas product installers
Note: Veritas products are installed under the /opt directory on the specified host systems. Ensure that
the directory /opt exists and has write permissions for root before starting an installation procedure.
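The /opt precondition above can be verified with a small shell sketch before starting the installer; check_install_dir is an illustrative helper name, not part of the Veritas installer:

```shell
# Check that the installation target directory exists and is writable
# by the current (root) user. Illustrative helper, not installer code.
check_install_dir() {
  dir=${1:-/opt}
  if [ -d "$dir" ] && [ -w "$dir" ]; then
    echo "ok"
  else
    echo "missing-or-unwritable"
  fi
}

check_install_dir /opt
```

Run this as root on each host before invoking the product installer.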
The Veritas product installer is the recommended method to license and install the product. There are
command-line-driven and web-based versions of the product installer available. Both versions enable
you to verify preinstallation requirements, configure the product, and view the product's description.
You can use the product installer to install Veritas Storage Foundation and Veritas Storage Foundation
Enterprise HA. Usually, during an installation, you can type b (back) to return to a previous section of the
installation procedure. The back feature of the installation scripts is context-sensitive, so it returns to the
beginning of a grouped section of questions. If an installation procedure hangs, press Ctrl+C to stop and
exit the program. There is a short delay before the script exits.
To install a Storage Foundation product, run the following steps from one node in each cluster.
1. If the installation file sets are on a DVD media, make sure the disk is mounted. Refer to
the “Mounting a software disk” section. This installation scenario uses downloaded
compressed tarfiles that have been unpacked into the /tmp directory on one of the cluster
nodes.
2. To invoke the common installer, run the installer command on the disk as shown in this example:
# /tmp/installer (to invoke the menu-based installation interface)
or, alternatively
# /tmp/webinstaller (to invoke the web-based installation interface)
3. Enter I to install a product and press Enter to begin.
4. When the list of available products is displayed, select the product you want to install by entering
the corresponding number, and press Enter. The product installation begins automatically.
5. Enter the Storage Foundation Enterprise HA/DR product license information.
Enter a product_name license key for isvp_sfha_clusterA: [?] XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X successfully registered on isvp_sfha_clusterA
Do you want to enter another license key for isvp_sfha_clusterA? [y,n,q,?] (n)
Enter a product_name license key for isvp_sfha_clusterB: [?] XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X
XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X successfully registered on isvp_sfha_clusterB
Do you want to enter another license key for isvp_sfha_clusterB? [y,n,q,?] (n)
Enter n if you have no further license keys to add for a system.
You are then prompted to enter the keys for the next system.
If the information is correct, press Enter. If the information is not correct, enter n. The installer
prompts you to enter the information again.
11. When prompted to configure the product to use Veritas Security Services, enter y or n to configure.
Note: Before configuring a cluster to operate using Veritas Security Services, another system must
already have Veritas Security Services installed and be operating as a Root Broker. Refer to the
Veritas Cluster Server Installation Guide for more information on configuring a VxSS Root Broker.
Would you like to configure product_name to use Veritas Security Services? [y,n,q]
(n) n
12. A message is displayed notifying you of the information required to add users. When prompted, set the username and/or password for the administrator.
Do you want to set the username and/or password for the Admin user (default
You must install the IBM SAN Volume Controller copy services agent on each node in the cluster. In
global cluster environments, install the agent on each node in each cluster. These instructions assume
that the Veritas Cluster Server is already installed. Perform the following steps to install the agent.
1. Make sure the disk is mounted. See “Mounting a Software Disk”.
2. Run the following command to navigate to the location of the agent packages:
# cd /cdrom/aix/replication/svccopyservices_agent/version/pkgs
The variable version represents the version of the agent. In this scenario, version 5.0.3.0 has been installed.
3. Run the following command to add the file sets for the software.
# installp -ac -d VRTSvcssvc.rte.bff VRTSvcssvc
All the required software components have now been installed. You should be able to list the file sets
mentioned in “Appendix D: Veritas Software file sets listing” on each application host.
Installing and configuring Oracle
Installing and configuring Oracle involves the following tasks:
Installation of Oracle software
Creation of an Oracle instance
Creation of the database
First, you need to install Oracle on all nodes in both clusters. Make sure that the installation prerequisites
are met and are identical on all nodes, especially the user and group ID, passwords, owner and group
permissions, and listener port ID. Refer to the appropriate “Appendix B: Setting up the database
applications” section for instructions to set up the database.
In this configuration, a database with the testmm schema is built. A database workload application is
used to populate the database and simulate a Transaction Processing Performance Council (TPC-C)
OLTP workload. You will need a workload application to drive the database load.
Having installed and configured the base system software and applications, the test team is now ready for
configuring the required applications for high availability and disaster recovery. Most clustered
applications can be adapted to a disaster recovery environment by:
Setting up the Storwize V7000 remote copy partnership
Creating Storwize V7000 Metro Mirror, Storwize V7000 Global Mirror relationship, and FlashCopy mappings
Setting up VCS Global Cluster
Configuring VCS application service group
Adding the VCS SAN Volume Controller copy services resource
Setting up
To quickly set up a similar test configuration, follow the steps in the “Quick setup” section. You can also
follow the procedure in the “General configuration steps” section and refer to the documents mentioned
there for detailed configuration steps.
Quick setup
The following section lists the steps to configure a new SFHA cluster using the provided sample VCS
configuration file from “Appendix C: Sample main.cf file - isvp_sfha_clusterA”.
1. Make sure that all of the objects mentioned in the VCS configuration file are created and available.
2. Halt the cluster server from any node in the clusters in Site A and Site B:
# /opt/VRTSvcs/bin/hastop -all
3. Copy the VCS configuration file (main.cf) from “Appendix C: Sample main.cf file - isvp_sfha_clusterA” to the /etc/VRTSvcs/conf/config directory as shown here.
On cluster nodes isvp12, isvp13 in Site A as: main.cf.siteA
On cluster nodes isvp14, isvp15 in Site B as: main.cf.siteB
4. Modify the values of host names, IP addresses, mount points, volumes and disk group resources, cluster names, passwords, and so on to match your site-specific configuration.
5. Run the following commands to copy the VCS agent type definition files, if they are not present in the destination directory.
# cp /etc/VRTSagents/ha/conf/Oracle/OracleTypes.cf /etc/VRTSvcs/conf/config/
# cp /etc/VRTSvcs/conf/SVCCopyServicesTypes.cf /etc/VRTSvcs/conf/config/
6. Run the following commands to overwrite the existing main.cf file on the primary node of each cluster, then use remote shell or SSH to copy that file to its local partner. For example, for main.cf on Site A cluster node 1 (isvp12):
# cd /etc/VRTSvcs/conf/config
# cp main.cf.siteA main.cf
# rcp main.cf isvp13:/etc/VRTSvcs/conf/config/main.cf
On Site B cluster node 1 (isvp14):
# cd /etc/VRTSvcs/conf/config
# cp main.cf.siteB main.cf
# rcp main.cf isvp15:/etc/VRTSvcs/conf/config/main.cf
7. Run the following command to verify that the main.cf file does not have any errors, and fix any issues that are reported. If there are no errors, the command returns with exit code 0.
# /opt/VRTSvcs/bin/hacf -verify /etc/VRTSvcs/conf/config
8. Run the following command to start the cluster on each node in the clusters in Site A and Site B:
# /opt/VRTSvcs/bin/hastart
9. Start the cluster manager from any node in the cluster at Site A. Log in to one of the nodes with the administrator user ID admin and password password.
10. On the first node at Site A, log in to the cluster manager and bring the app_grp1 service group online if it is not already online.
11. You are now ready to manage the clusters from the cluster manager GUI. To test HA and DR scenarios, proceed with the instructions in the “Failover scenarios” section.
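Steps 7 and 8 above can be combined into a small guard sketch so that the cluster is started only when verification succeeds; verify_and_start is an illustrative helper (not a Veritas tool), and the default paths are the standard /opt/VRTSvcs/bin locations:

```shell
# Verify main.cf with hacf and start VCS only on success.
# Illustrative helper; the command paths are parameters so the
# sketch can be exercised outside a real VCS installation.
verify_and_start() {
  hacf=${1:-/opt/VRTSvcs/bin/hacf}
  hastart=${2:-/opt/VRTSvcs/bin/hastart}
  confdir=${3:-/etc/VRTSvcs/conf/config}
  if "$hacf" -verify "$confdir"; then
    "$hastart"
  else
    echo "main.cf verification failed; cluster not started" >&2
    return 1
  fi
}
```

On a cluster node you would simply run verify_and_start with no arguments, repeating it on each node in Site A and Site B.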
General configuration steps
In this section, the steps mentioned in the setup are explained in detail. Refer to the documents listed in
Table 3 for additional details of configuration procedures. You will need the following guides:
Veritas Cluster Server User’s Guide
Veritas Cluster Server Bundled Agents Reference Guide
Veritas Cluster Server Agent for Oracle Installation and Configuration Guide
Veritas Cluster Server Agent for IBM SVCCopyServices Installation and Configuration Guide
Setting up Storwize V7000 cluster partnership
Refer to “Setting up the Storwize V7000 remote copy partnership” section for more information.
Configuring Storwize V7000 Metro Mirror, Global Mirror, and FlashCopy relationships
For more details, refer to the following sections: “Creating Storwize V7000 Metro Mirror, Global Mirror,
and FlashCopy consistency groups” and “Creating Storwize V7000 Metro Mirror, Storwize V7000 Global
Mirror relationship, and FlashCopy mappings”.
Setting up VCS Global Cluster
Refer to the section Linking clusters in the Veritas Cluster Server User’s Guide. The Remote Cluster
Configuration wizard provides an easy interface to link clusters. Before linking clusters, verify that the
virtual IP address for the ClusterAddress attribute for each cluster is set. Use the same IP address as the
one assigned to the IP resource in the ClusterService group. Have the following information ready:
The active host name or IP address of each cluster in the global configuration and of the cluster being added to the configuration
The administrator login name and password for each cluster in the configuration
The VCS cluster management GUI (Java™ Console) is no longer packaged in the latest 5.1 release of
VCS. If you attempt to issue the command used to manage previous installations, you will see the
following message:
isvp12> /opt/VRTSvcs/bin/hagui &
[1] 13369510
isvp12> VCS Single Cluster Manager (Java Console) is no longer packaged with VCS. Symantec recommends use of the Veritas Operations Manager (VOM) to manage, monitor and report on multi-cluster environments. You can download VOM at http://go.symantec.com/vom. If you wish to continue using the VCS Single Cluster Manager, you can get it at no charge at the http://go.symantec.com/vcsm_download website.
If you elect to download and use the VCS single-cluster manager, you can still click Edit > Add/Delete
Remote Cluster, or alternatively use the Veritas Operations Manager (VOM).
Configuring global cluster
From any node in the clusters in Site A and Site B, run the GCO Configuration wizard to create or update
the ClusterService group. The wizard verifies your configuration and validates it for a global cluster setup.
#/opt/VRTSvcs/bin/gcoconfig
The wizard discovers the NIC devices on the local system and prompts you to enter the device to be used
for the global cluster.
Specify the name of the device and press Enter. If you do not have NIC resources in your configuration,
the wizard asks you whether the specified NIC will be the public NIC used by all systems. Enter y if it is
the public NIC; otherwise enter n. If you entered n, the wizard prompts you to enter the names of NICs on
all systems.
Enter the virtual IP to be used for the global cluster, which you have already identified. If you do not have
IP resources in your configuration, the wizard prompts you for the netmask associated with the virtual IP.
The wizard detects the netmask; you can accept the suggested value or enter another value.
The wizard starts running the commands to create or update the ClusterService group. Various
messages indicate the status of these commands. After running these commands, the wizard brings the
ClusterService group online.
Attribute definitions for the SVCCopyServices agent
Review the description of the agent attributes.
Required attributes
You need to assign values to the required attributes.
Attribute Description
GroupName Name of the replication relationship or consistency group that is managed by the agent.
Type-dimension string-scalar
IsConsistencyGroup
Indicates whether the value specified in the GroupName attribute is the name of a single replication relationship or of a consistency group consisting of several replication relationships. The attribute value is either 0 or 1. Default is 1.
Type-dimension string-scalar
SSHBinary
Contains the absolute path to the SSH binary. SSH is the mode of communication with the SAN Volume Controller cluster that is connected to the node. Default is /usr/bin/ssh.
Type-dimension string-scalar
SSHPathToIDFile
Contains the absolute path to the identity file used for authenticating the host with the SAN Volume Controller cluster. The corresponding public key must be uploaded on the SAN Volume Controller cluster so that the SAN Volume Controller cluster can correctly authenticate the host.
SVCClusterIP Is the IP address of the SAN Volume Controller cluster in the dot notation. The agent uses this IP address to communicate with the SAN Volume Controller cluster.
Type-dimension string-scalar
SVCUserName
Is the user name that authenticates the SSH connection with the SAN Volume Controller cluster. Default is admin.
StopTakeover
Determines whether the agent makes read-write access available to the host when the replication is in a stopped state (that is, consistent_stopped). The status of the replication goes into a stopped state when the user runs the stoprcrelationship or the stoprcconsistgrp command; no replication then occurs between the primary and secondary SAN Volume Controller clusters. The attribute value is either 0 or 1. The default value is 0. If it is set to 1, there is a possibility of data loss if the application continued to write data on the primary cluster after the replication was stopped: when the agent enables read-write access on the secondary SAN Volume Controller cluster, the secondary cluster does not have up-to-date data on it. The possible stopped states are:
inconsistent_stopped
consistent_stopped
When the state of the replication is consistent_stopped and StopTakeover = 1, the agent enables read-write access for the SAN Volume Controller cluster. When the state of the replication is inconsistent_stopped, the agent does not enable read-write access for the SAN Volume Controller cluster.
Type-dimension string-scalar
DisconnectTakeover
Determines whether the agent makes read-write access available to the host when the replication is in a disconnected state (that is, consistent_disconnected). The status of the replication goes into a disconnected state when the primary and secondary SAN Volume Controller clusters lose communication with each other; no replication then occurs between the primary and secondary SAN Volume Controller clusters. The attribute value is either 0 or 1. Default is 0. The possible disconnected states are:
idling_disconnected
inconsistent_disconnected
consistent_disconnected
When the state of the replication is consistent_disconnected and DisconnectTakeover = 1, the agent enables read-write access for the SAN Volume Controller cluster. When the state of the replication is idling_disconnected, the agent does not enable read-write access for the SAN Volume Controller cluster.
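The StopTakeover and DisconnectTakeover rules described above amount to a small decision table, sketched here as an illustrative shell helper (takeover_decision is not part of the actual agent):

```shell
# Decide whether the agent would enable read-write access on the
# secondary cluster for a given replication state and the two
# attribute values. Illustrative sketch of the rules, not agent code.
takeover_decision() {
  state=$1 stop_takeover=$2 disconnect_takeover=$3
  case $state in
    consistent_stopped)
      [ "$stop_takeover" -eq 1 ] && echo enable || echo no-takeover ;;
    consistent_disconnected)
      [ "$disconnect_takeover" -eq 1 ] && echo enable || echo no-takeover ;;
    *)
      # inconsistent_stopped, inconsistent_disconnected, and
      # idling_disconnected never get read-write access.
      echo no-takeover ;;
  esac
}
```

For example, takeover_decision consistent_stopped 1 0 reflects a StopTakeover = 1 configuration, where the agent enables read-write access despite the data-loss risk noted above.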
Attribute definitions for the SVCCopyServicesSnap agent
Review the description of the agent attributes. You can find additional details on the
SVCCopyServicesSnap agent in the Veritas Cluster Server Agent for IBM SVCCopyServices Installation
and Configuration Guide, mentioned in Table 3.
Required attributes
You must assign values to required attributes.
Attribute Description
TargetResName Name of the resource managing the LUNs that you want to take a snapshot of. Set this attribute to the name of the SVCCopyServices resource if you want to take a snapshot of replicated data. Set this attribute to the name of the DiskGroup resource if the data is not replicated. For example, in a typical Oracle setup, you might replicate data files and redo logs, but you might choose to avoid replicating temporary tablespaces. The temporary tablespace must still exist at the DR site and might be part of its own disk group.
UseSnapshot Specifies whether the SVCCopyServicesSnap resource takes a
local snapshot of the target array. Set this attribute to 1 for Gold and Silver configurations. For Bronze, set this attribute to 0.
Type-dimension integer-scalar
RequireSnapshot Specifies whether the SVCCopyServicesSnap resource must take a snapshot before coming online. Set this attribute to 1 if you want the resource to come online only after it succeeds in taking a snapshot. Set this attribute to 0 if you want the resource to come online even if it fails to take a snapshot. Setting this attribute to 0 creates the Bronze configuration. Note: Set this attribute to 1 only if UseSnapshot is set to 1.
Type-dimension integer-scalar
MountSnapshot Specifies whether the resource uses the snapshot to bring the service group online. Set this attribute to 1 for Gold configuration. For Silver and Bronze configurations, set the attribute to 0. Note: Set this attribute to 1 only if UseSnapshot is set to 1.
Type-dimension integer-scalar
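Taken together, the UseSnapshot, RequireSnapshot, and MountSnapshot settings map to the Gold, Silver, and Bronze configurations as sketched below; snap_attrs is an illustrative helper, and the Silver value of RequireSnapshot=1 is an assumption consistent with the attribute descriptions above:

```shell
# Map a DR configuration tier to the snapshot attribute values
# described above. Illustrative sketch, not part of VCS.
snap_attrs() {
  case $1 in
    Gold)   echo "UseSnapshot=1 RequireSnapshot=1 MountSnapshot=1" ;;
    Silver) echo "UseSnapshot=1 RequireSnapshot=1 MountSnapshot=0" ;;
    Bronze) echo "UseSnapshot=0 RequireSnapshot=0 MountSnapshot=0" ;;
    *)      echo "unknown-tier" ;;
  esac
}
```

The constraint in the notes above (RequireSnapshot and MountSnapshot may be 1 only when UseSnapshot is 1) holds for every row of this mapping.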
Responsibility
Do not modify. For internal use only. Used by the agent to keep track of desynchronizing snapshots.
Type-dimension temporary string
FCMapGroupName Name of the FlashCopy mapping or FlashCopy consistency group. If the target SVCCopyServices resource contains a consistency group, set FCMapGroupName to a FlashCopy consistency group. If the target SVCCopyServices resource contains a relationship, set FCMapGroupName to a FlashCopy mapping. This attribute is optional for Bronze configurations.
mkdir -p /u01/app/oracle/admin/testmm/adump
mkdir -p /u01/app/oracle/admin/testmm/dpdump
mkdir -p /u01/app/oracle/admin/testmm/pfile
mkdir -p /u01/app/oracle/cfgtoollogs/dbca/testmm
mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1/dbs
mkdir -p /v7k_mm_testmount/testmm
umask ${OLD_UMASK}
ORACLE_SID=testmm; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH
echo You should add this entry in the /etc/oratab: testmm:/u01/app/oracle/product/11.2.0/dbhome_1:Y
/u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/admin/testmm/scripts/testmm.sql
$ cat testmm.sql
set verify off
ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
host /u01/app/oracle/product/11.2.0/dbhome_1/bin/orapwd file=/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwtestmm force=y
@/u01/app/oracle/admin/testmm/scripts/CloneRmanRestore.sql
@/u01/app/oracle/admin/testmm/scripts/cloneDBCreation.sql
@/u01/app/oracle/admin/testmm/scripts/postScripts.sql
@/u01/app/oracle/admin/testmm/scripts/lockAccount.sql
@/u01/app/oracle/admin/testmm/scripts/postDBCreation.sql
$ cat CloneRmanRestore.sql
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/testmm/scripts/CloneRmanRestore.log append
startup nomount pfile="/u01/app/oracle/admin/testmm/scripts/init.ora";
@/u01/app/oracle/admin/testmm/scripts/rmanRestoreDatafiles.sql;
spool off
CreateDBFiles.sql
connect SYS/&&sysPassword as SYSDBA
set echo on
spool /oracle/orahome/assistants/dbca/logs/CreateDBFiles.log
CREATE TABLESPACE "USERS1" LOGGING
  DATAFILE '/oradata/&&DBNAME/mnt2/users01.dbf' SIZE 5M REUSE
  AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "USERS2" LOGGING
  DATAFILE '/oradata/&&DBNAME/mnt3/users02.dbf' SIZE 5M REUSE
  AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "USERS3" LOGGING
  DATAFILE '/oradata/&&DBNAME/mnt4/users03.dbf' SIZE 5M REUSE
  AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
spool off
$ cat cloneDBCreation.sql
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/testmm/scripts/cloneDBCreation.log append
Create controlfile reuse set database "testmm"
  MAXINSTANCES 8 MAXLOGHISTORY 1 MAXLOGFILES 16 MAXLOGMEMBERS 3 MAXDATAFILES 100
  Datafile
    '/v7k_mm_testmount/testmm/system01.dbf',
    '/v7k_mm_testmount/testmm/sysaux01.dbf',
    '/v7k_mm_testmount/testmm/undotbs01.dbf',
    '/v7k_mm_testmount/testmm/users01.dbf'
  LOGFILE
    GROUP 1 ('/v7k_mm_testmount/testmm/redo01.log') SIZE 51200K,
    GROUP 2 ('/v7k_mm_testmount/testmm/redo02.log') SIZE 51200K,
    GROUP 3 ('/v7k_mm_testmount/testmm/redo03.log') SIZE 51200K
  RESETLOGS;
exec dbms_backup_restore.zerodbid(0);
shutdown immediate;
startup nomount pfile="/u01/app/oracle/admin/testmm/scripts/inittestmmTemp.ora";
Create controlfile reuse set database "testmm"
  MAXINSTANCES 8 MAXLOGHISTORY 1 MAXLOGFILES 16 MAXLOGMEMBERS 3 MAXDATAFILES 100
  Datafile
    '/v7k_mm_testmount/testmm/system01.dbf',
    '/v7k_mm_testmount/testmm/sysaux01.dbf',
    '/v7k_mm_testmount/testmm/undotbs01.dbf',
    '/v7k_mm_testmount/testmm/users01.dbf'
  LOGFILE
    GROUP 1 ('/v7k_mm_testmount/testmm/redo01.log') SIZE 51200K,
    GROUP 2 ('/v7k_mm_testmount/testmm/redo02.log') SIZE 51200K,
    GROUP 3 ('/v7k_mm_testmount/testmm/redo03.log') SIZE 51200K
  RESETLOGS;
alter system enable restricted session;
alter database "testmm" open resetlogs;
exec dbms_service.delete_service('seeddata');
exec dbms_service.delete_service('seeddataXDB');
alter database rename global_name to "testmm";
ALTER TABLESPACE TEMP ADD TEMPFILE '/v7k_mm_testmount/testmm/temp01.dbf' SIZE 20480K REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED;
select tablespace_name from dba_tablespaces where tablespace_name='USERS';
select sid, program, serial#, username from v$session;
alter database character set INTERNAL_CONVERT WE8MSWIN1252;
alter database national character set INTERNAL_CONVERT AL16UTF16;
alter user sys account unlock identified by "&&sysPassword";
alter user system account unlock identified by "&&systemPassword";
alter system disable restricted session;