Oracle 12c MultiTenant
James Anthony | Technology Director
DBaaS goals
• Reduce service catalogue
  – Better support & maintenance model
  – "Evergreening" / security and patch management
• Cloud-like deployment
  – Self service
  – Charge back / show back
  – Template based
Consolidation Challenges
• Improve over server virtualisation
  – No application awareness
  – The common criticism of schema-led consolidation
• Improve, not harm, performance
  – Resource management & isolation
  – Consolidation can't risk SLAs
• Simplify
  – Especially patching and upgrades
Approaches & Challenges
• Schema as a Service / schema consolidation
  – Highest density
  – Good supportability
  – Lack of isolation
  – Vendor / application support and co-existence
Approaches & Challenges
• Virtual machines
  – Fits well with existing virtualisation strategies
  – Isolation (perhaps not at the IO level)
  – VM sprawl & same space usage
  – Non-database aware
  – Lots of additional scripting
  – Limited DBA productivity enhancement
Approaches & Challenges
• Database instances
  – Good density (reduced number of OS images)
  – Enabled through Grid Control etc. tooling
  – Isolation
  – Still a (possibly) large number of DB instances
  – Upgrade and patching cycle
12c Multi-Tenant
• A consolidation engine
• Improves on hardware virtualisation, where each OS image has an overhead
• Multiple instances on a server carry overheads for each instance
Components
• Container Database (CDB)
  – A logical container, NOT something a "user" connects to
  – Administrators connect to and work at this level
  – Each instance in a RAC cluster opens the whole CDB
• Pluggable Database (PDB)
  – Fully compatible with pre-12c
  – Multiple PDBs within a single CDB
  – Resource management extended to between PDBs
  – Integrated at the EM and SQL Developer level
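As a sketch of how the two levels are addressed in practice (the container name PDB1 and the scott credentials are examples, not from this deck):

```sql
-- A common (administrator) user connects at the CDB level and can
-- move between containers in one session:
CONNECT sys@CDB AS SYSDBA
SHOW CON_NAME                          -- CDB$ROOT
ALTER SESSION SET CONTAINER = PDB1;    -- now working inside the PDB
SHOW CON_NAME                          -- PDB1

-- An application "user" never sees the CDB; it connects straight
-- to the PDB through the PDB's own service:
CONNECT scott/tiger@//dbhost:1521/PDB1
```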
Architecture
• Pristine Oracle RDBMS
• Only one reality

(diagram: the dictionary tables OBJ$, TAB$ and SOURCE$)
Architecture
• Once user objects and data go in, the system becomes "polluted"

(diagram: the dictionary tables OBJ$, TAB$ and SOURCE$ now sit alongside the application tables DEPT and EMP)
Architecture
(diagram: the dictionary tables OBJ$, TAB$ and SOURCE$ stay in the root, while the application tables DEPT and EMP live in their own container)

• Separation of application and system
  – ORACLE-only metadata
  – Application-only metadata
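One way to see this separation from the root is that the CDB_ dictionary views carry a CON_ID column spanning all containers (a sketch; the counts depend entirely on your CDB):

```sql
-- From CDB$ROOT: count dictionary-visible objects per container.
-- Oracle-maintained metadata sits with the root, while application
-- objects appear only under their own PDB's CON_ID.
SELECT con_id, COUNT(*) AS objects
FROM   cdb_objects
GROUP  BY con_id
ORDER  BY con_id;
```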
Pluggable Databases
• Other benefits
  – Rapid provisioning & cloning
  – Re-provision (unplug and re-plug)
  – Single upgrade for all PDBs
  – Single backup
  – Recovery available also at PDB level
Provisioning
(diagram: a new PDB being provisioned inside the root CDB)
Provisioning
• Similar to backup
• On-clone triggers
• Copy-on-change file systems mean near-instant cloning
• Clone across CDBs using database links
Creating from the seed
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             READ ONLY  NO
     3 PDB1                 READ WRITE NO
SQL> create pluggable database PDB2 admin user pdbadmin identified by pdbadmin storage (maxsize 5g);
Pluggable database created.
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             READ ONLY  NO
     3 PDB1                 READ WRITE NO
     4 PDB2                 MOUNTED
Now let’s clone a PDB
  1  create pluggable database pdb3 from pdb1
  2  file_name_convert = (
  3  '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/system.278.881827707', '+DATA/CDB/PDB3/DATAFILE/system01.dbf',
  4  '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/sysaux.277.881827707', '+DATA/CDB/PDB3/DATAFILE/sysaux01.dbf',
  5  '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/users.280.881827735', '+DATA/CDB/PDB3/DATAFILE/users01.dbf',
  6* '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/TEMPFILE/temp.279.881827721', '+DATA/CDB/PDB3/TEMPFILE/temp01.dbf')
SQL> /
Pluggable database created.
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             READ ONLY  NO
     3 PDB1                 READ ONLY  NO
     4 PDB2                 READ WRITE NO
     5 PDB3                 MOUNTED
What’s with the names?
• Did you notice my PDB name was PDB1?
• Then what's with the file names?

'+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/system.278.881827707'

• That long identifier is the container's GUID:

SQL> SELECT guid FROM V$CONTAINERS WHERE con_id = 3;

GUID
--------------------------------
1804031B0E397EFEE0531D84C80AC19F
Instant Provisioning
• With copy-on-change filesystems near instantaneous cloning
(diagram: a new PDB is thin-cloned from an existing PDB inside the root CDB)
Using ACFS as copy-on-change FS
Step 1) use standard cloning to put our new PDB into ACFS
SQL> create pluggable database ACFSPDB from PDB1
  2  file_name_convert = (
  3  '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/system.278.881827707', '/acfs/oradata/CDB/ACFSPDB/DATAFILE/system01.dbf',
  4  '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/sysaux.277.881827707', '/acfs/oradata/CDB/ACFSPDB/DATAFILE/sysaux01.dbf',
  5  '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/users.280.881827735', '/acfs/oradata/CDB/ACFSPDB/DATAFILE/users01.dbf',
  6  '+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/TEMPFILE/temp.279.881827721', '/acfs/oradata/CDB/ACFSPDB/TEMPFILE/temp01.dbf');
Pluggable database created.
• Step 2) Use the SNAPSHOT CLONE keywords to create a new clone
SQL> create pluggable database acfspdb3 from acfspdb
  2  file_name_convert = ('ACFSPDB', 'ACFSPDB3')
  3  snapshot copy;
Pluggable database created.
Elapsed: 00:00:09.75
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             READ ONLY  NO
     3 PDB1                 READ ONLY  NO
     4 PDB2                 READ WRITE NO
     5 PDB3                 READ WRITE NO
     6 ACFSPDB              READ ONLY  NO
SQL> select sum(bytes)/1024/1024 from cdb_data_files where con_id = 6;
SUM(BYTES)/1024/1024
--------------------
                 870
How Big?
Viewing the snapshot
• We can use acfsutil to see the snapshots for a given file system
[root@multitenant ~]# acfsutil snap info /acfs
snapshot name:               18282697471F0787E0531D84C80A9097
snapshot location:           /acfs/.ACFS/snaps/18282697471F0787E0531D84C80A9097
RO snapshot or RW snapshot:  RW
parent name:                 /acfs
snapshot creation time:      Wed Jun 10 03:18:57 2015
number of snapshots:         1
snapshot space usage:        38871040  ( 37.07 MB )
What about a larger PDB?
Un-Plug / Re-Plug
(diagram: a PDB is unplugged from a 12.1.0.1 CDB and re-plugged into a 12.1.0.2 CDB)
Unplug/plug
• Simple process
  – ALTER PLUGGABLE DATABASE <> UNPLUG INTO <>;
  – CREATE PLUGGABLE DATABASE <> USING <> FILE_NAME_CONVERT = <> COPY/NOCOPY;
• Useful points to remember:
  – You can re-plug back into the original
  – If the upgrade/plug fails, backing out is simple
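Filled out, an unplug/re-plug round trip looks something like the sketch below (the PDB name, XML path, and NOCOPY choice are illustrative; the PDB must be closed before it can be unplugged):

```sql
-- On the source CDB:
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;   -- datafiles stay for the re-plug

-- On the target CDB (e.g. the 12.1.0.2 one):
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
```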
Cloning on different nodes!
(diagram: PDB1 in the CDB on node A is cloned to PDB2 in the CDB on node B)
Thin cloning to another node
SQL> create database link STD1 connect to system identified by manager using 'STD1';
Database link created
SQL> create pluggable database SPDB2 from SPDB1@STD1
  2  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/STD1/SPDB1', '/u01/app/oracle/oradata/STD2/SPDB2')
  3  SNAPSHOT COPY;
Pluggable database created.
Thin cloning to another node
SQL> alter pluggable database spdb2 open;
Pluggable database altered.

SQL> alter session set container=SPDB2;
Session altered.
SQL> select file_name from dba_data_files;
FILE_NAME
--------------------------------------------------------------------
/u01/app/oracle/oradata/STD2/SPDB2/system01.dbf
/u01/app/oracle/oradata/STD2/SPDB2/sysaux01.dbf
/u01/app/oracle/oradata/STD2/SPDB2/SPDB1_users01.dbf
Cloning on different nodes!
[root@rac12c-node1 ~]# /sbin/acfsutil info fs
/u01
ACFS Version: 12.1.0.2.0
on-disk version: 43.0
flags: MountPoint,Available
mount time: Mon Apr 6 08:07:16 2015
allocation unit: 4096
volumes: 1
total size: 64424509440 ( 60.00 GB )
total free: 38376742912 ( 35.74 GB )
primary volume: /dev/asm/acfs-258
label:
…
number of snapshots: 1
snapshot space usage: 120287232 ( 114.71 MB )
replication status: DISABLED
For more information…
• http://www.redstk.com/snapcloning-a-remote-pdb-in-12c/
Backup (rman) operations
• Connected to the CDB, RMAN allows you to back up:
  – The entire CDB
  – Just the root
  – One or more PDBs (with a single command):
    BACKUP PLUGGABLE DATABASE pdb1, pdb2;
  – Individual data files (file IDs are unique across the CDB)
  – Archive logs (these are at CDB level)
• When connected directly to the PDB:
  – Back up tablespaces
  – Back up data files within the PDB
• What's missing (in my opinion):
  – An exclude clause
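For illustration, the levels above map onto RMAN commands roughly like this (PDB names and the file ID are examples):

```sql
RMAN> BACKUP DATABASE;                        -- the entire CDB
RMAN> BACKUP DATABASE ROOT;                   -- just the root
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;   -- one or more PDBs in one command
RMAN> BACKUP DATAFILE 12;                     -- file IDs are unique CDB-wide
RMAN> BACKUP ARCHIVELOG ALL;                  -- archive logs live at CDB level
```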
Recovery (rman) operations
• Can recover
  – The entire CDB
  – Just the root (not recommended!)
  – Individual PDBs:
• RESTORE PLUGGABLE DATABASE pdb1, pdb2;
• RECOVER PLUGGABLE DATABASE pdb1, pdb2;
DataGuard and Multi-tenant
• Multi-tenant WILL impact your usage of Data Guard in a real-life situation
• First, let's look at the impact of creating new PDBs
Let’s create a new PDB
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             READ ONLY  NO
     3 PDB1                 READ WRITE NO
SQL> create pluggable database PDB2 admin user pdbadmin identified by pdbadmin storage (maxsize 5g);
Pluggable database created.
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             READ ONLY  NO
     3 PDB1                 READ WRITE NO
     4 PDB2                 MOUNTED
The impact on the standby
• Before…

SQL> show pdbs

CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             MOUNTED
     3 PDB1                 MOUNTED
• After…

SQL> show pdbs

CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             MOUNTED
     3 PDB1                 MOUNTED
     4 PDB2                 MOUNTED
What do my Datafiles look like?
SQL> select name from v$datafile where con_id = 3;
NAME
--------------------------------------------------------------------------------
+DATA/CDBDR/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/system.348.881940701
+DATA/CDBDR/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/sysaux.349.881940709
+DATA/CDBDR/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/users.350.881940723
• So far so good!
Now let’s clone a PDB
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             READ ONLY  NO
     3 PDB1                 READ ONLY  NO
     4 PDB2                 READ WRITE NO
SQL> select file_name from cdb_data_files where con_id = 3;
FILE_NAME
--------------------------------------------------------------------------------
+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/system.278.881827707
+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/sysaux.277.881827707
+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/DATAFILE/users.280.881827735
SQL> select file_name from cdb_temp_files where con_id = 3;
FILE_NAME
--------------------------------------------------------------------------------
+DATA/CDB/1804031B0E397EFEE0531D84C80AC19F/TEMPFILE/temp.279.881827721
Primary Datafiles?
SQL> select name from v$datafile where con_id = 5;
NAME
-----------------------------------------------------------
+DATA/CDB/PDB3/DATAFILE/system01.dbf
+DATA/CDB/PDB3/DATAFILE/sysaux01.dbf
+DATA/CDB/PDB3/DATAFILE/users01.dbf
What about my standby?
SQL> show pdbs
CON_ID CON_NAME             OPEN MODE  RESTRICTED
------ -------------------- ---------- ----------
     2 PDB$SEED             MOUNTED
     3 PDB1                 MOUNTED
     4 PDB2                 MOUNTED
     5 PDB3                 MOUNTED
SQL> select name from v$datafile where con_id = 5;
NAME
-------------------------------------------------------------
+DATA/CDBDR/pdb3/datafile/system01.dbf
Alert Log
Tue Jun 09 16:01:36 2015
Errors in file /u00/app/oracle/diag/rdbms/cdbdr/CDBDR/trace/CDBDR_pr00_1535.trc:
ORA-01274: cannot add data file that was originally created as '+DATA/CDB/PDB3/DATAFILE/system01.dbf'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovery stopped due to failure in applying recovery marker (opcode 17.34).
Datafiles are recovered to a consistent state at change 1622173 but controlfile could be ahead of datafiles.
Tue Jun 09 16:01:36 2015
Errors in file /u00/app/oracle/diag/rdbms/cdbdr/CDBDR/trace/CDBDR_pr00_1535.trc:
ORA-01274: cannot add data file that was originally created as '+DATA/CDB/PDB3/DATAFILE/system01.dbf'
Tue Jun 09 16:01:36 2015
MRP0: Background Media Recovery process shutdown (CDBDR)
Tue Jun 09 16:01:36 2015
Checker run found 1 new persistent data failures
Fixing is easy (ish)
• Step 1) Take a copy of my missing datafile(s)
RMAN> copy datafile 12 to '/tmp/backup_file_12.dbf';

Starting backup at 09-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=125 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00012 name=+DATA/CDB/PDB3/DATAFILE/system01.dbf
output file name=/tmp/backup_file_12.dbf tag=TAG20150609T180114 RECID=1 STAMP=881949681
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
Finished backup at 09-JUN-15
Starting Control File and SPFILE Autobackup at 09-JUN-15
piece handle=+DATA/CDB/AUTOBACKUP/2015_06_09/s_881949689.381.881949691 comment=NONE
Finished Control File and SPFILE Autobackup at 09-JUN-15
Fixing is easy (ish)
• Step 2) Transfer and catalog the backup copy @ standby
RMAN> Catalog datafilecopy '/tmp/backup_file_12.dbf';
Starting implicit crosscheck backup at 09-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=364 device type=DISK
Crosschecked 2 objects
Finished implicit crosscheck backup at 09-JUN-15
Starting implicit crosscheck copy at 09-JUN-15
using channel ORA_DISK_1
Finished implicit crosscheck copy at 09-JUN-15
searching for all files in the recovery area
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: +DATA/CDBDR/CONTROLFILE/current.333.881936085
File Name: +DATA/CDBDR/CONTROLFILE/current.336.881940437
cataloged datafile copy
datafile copy file name=/tmp/backup_file_12.dbf RECID=21 STAMP=881949791
Fixing is easy(ish)
• Step 3) Get the backup into ASM
  – Get the FILE ID from v$datafile where con_id = <CONTAINER ID> on the standby

RMAN> Backup as copy datafile 12 format '+DATA';
Starting backup at 09-JUN-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00012 name=/tmp/backup_file_12.dbf
output file name=+DATA/CDBDR/181E72077BFB06B5E0531D84C80A4130/DATAFILE/system.382.881949881 tag=TAG20150609T180441 RECID=22 STAMP=881949883
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 09-JUN-15
Fixing is easy(ish)
• Step 4) Now just run a switch
RMAN> switch datafile 12 to COPY;
datafile 12 switched to datafile copy "/tmp/backup_file_12.dbf"
Now we can carry on…
SQL> select file#, name from v$datafile where con_id = 5;
FILE# NAME
----- --------------------------------------------------------------------------
   12 +DATA/CDBDR/181E72077BFB06B5E0531D84C80A4130/DATAFILE/system.382.881949881
SQL> alter database recover managed standby database disconnect 2 /
Database altered.
SQL> select file#, name from v$datafile where con_id = 5;
FILE# NAME
----- --------------------------------------------------------------------------
   12 +DATA/CDBDR/181E72077BFB06B5E0531D84C80A4130/DATAFILE/system.382.881949881
   12 +DATA/CDBDR/pdb3/datafile/sysaux01.dbf
Other options
• Active Data Guard will do the job for us!
• We can use the (very handy) asmcmd to "cp" files across between the two nodes
  – Datafiles must be offline when we do this
  – We can orchestrate as part of the cloning operation
  – BUT… it will extend the time the new PDB is unavailable

ASMCMD> cp +DATA/PCDB/OLA/DATAFILE/users01.dbf [email protected].+ASM:+DATA/CDBDR/OLSDATAFILE/users01.dbf
Enter password: ********
copying +DATA/PCDB/OLA/DATAFILE/users01.dbf -> 12c-shared.redstk.com:+DATA/CDBDR/OLA/DATAFILE/users01.dbf
Failover to standby
• Redo apply is always at the CDB level
• Any change of role is at the whole-CDB level
Recap
• Other benefits
  – Rapid provisioning & cloning
  – Re-provision (unplug and re-plug)
  – Single upgrade for all PDBs
  – Single backup
  – Recovery available also at PDB level
• Cloning of "seed" PDB for ISVs
Resource Protection
• Prevent run-away processes
• Between PDBs / within PDBs
  – CPU
  – Parallelism
  – IO (Exadata only)
(diagram: a container database hosting DW, CRM and ERP PDBs at high, medium and low priority)
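A CDB resource plan along these lines can be sketched with DBMS_RESOURCE_MANAGER; the plan name, PDB names, and share/limit values below are invented for illustration:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'consol_plan',
    comment => 'CPU and parallelism shared between PDBs');

  -- High-priority PDB: 3 shares, full CPU and parallel server access
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'consol_plan',
    pluggable_database    => 'erp',
    shares                => 3,
    utilization_limit     => 100,
    parallel_server_limit => 100);

  -- Low-priority PDB: 1 share, capped at half the CPU and parallel servers
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'consol_plan',
    pluggable_database    => 'dw',
    shares                => 1,
    utilization_limit     => 50,
    parallel_server_limit => 50);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

-- Enable the plan at the CDB root:
ALTER SYSTEM SET resource_manager_plan = 'consol_plan';
```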
Multitenant New Features in 12.1.0.2
• Cloning
  – Subset by tablespace
  – Metadata-only clone
  – Remote clone (including snapshots)
  – File system-agnostic cloning via dNFS (clonedb = true)
• Cross-PDB queries
  – New SQL clause to aggregate data across PDBs:

    select ENAME from containers(scott.EMP) where CON_ID in (45, 49);

• Standby & logging
  – New "standbys" clause (all | none)
  – Nologging clause at PDB level
• Additional features
  – Flashback data archive, transaction query & backout
  – Temporal SQL support
  – Compatible with Database In-Memory
  – Maintains state of PDBs between CDB restarts
Our Experience
• Reduction in hardware
  – Replaced 4 servers with a single server
  – Favour memory reduction over CPU
• Reduction in backup costs/time
  – Single backup to manage & report on
• Reduction in management time
  – Number of tickets reduced substantially
  – 72% reduction in number of tickets over a 6-month period vs. previous
• Reduction in services time
  – Cloning (90% reduction in effort)
  – Setup and config (95% reduction in effort)
What did we learn?
• It's like rman, not like export/import
• Shifting platforms is a breeze
• IO resource management would be really handy
• Just like any PaaS/IaaS debate, you'll still get server huggers
• UTL_FILE and shared filesystems with app server
• Diagnostics pack should be mandatory
Red Stack Tech
27-30 Railway Street
Chelmsford
Essex
United Kingdom
CM1 1QS
Telephone: 01245 200510
Email: [email protected]
www.redstk.com