8/13/2019 Clariion ppt
1/121
SAN
CLARiiON
Clariion
Agenda
Introduction
Hardware overview
Software overview
Clariion Management
Clariion Configuration
Clariion Objects
Clariion Applications
Clariion Timeline
All members of the CX family have a similar architecture. The main differences are the
number of front-end and back-end ports, the CPU types and speeds, and the amount
of memory per SP.
Clariion Hardware
CLARiiON Hardware Architecture: Delivering Data and Application Availability
Fully redundant architecture
SP, cooling, data paths, SPS
Non-stop operation
Online software upgrades
Online hardware changes
Continuous diagnostics
Data and system
CLARalert
Advanced data integrity
Mirrored write cache
De-stages write cache to disk upon power failure
No single points of failure
Tiered capacity
FC and ATA disks
From five to 480 disks
Flexibility
Mix drive types, RAID levels
RAID levels 0, 1, 1+0, 3, 5
Up to 16 GB memory
Dual I/O paths with non-disruptive failover
Clariion Architecture
The CLARiiON architecture is based on intelligent Storage Processors that manage physical
drives on the back-end and service host requests on the front-end. Depending on the
model, each Storage Processor includes either one or two CPUs. Storage Processors
communicate with each other over the CLARiiON Messaging Interface (CMI).
Both the front-end connection to the host and the back-end connection to the physical
storage are 2 Gb/4 Gb Fibre Channel.
CLARiiON Features
Data Integrity
How CLARiiON keeps data safe
(mirrored write cache, vault, etc.)
Data Availability
Ensuring uninterrupted host access to data
(hardware redundancy, path failover software (PowerPath), error reporting capability)
CLARiiON Performance
What makes CLARiiON a great performer
(cache, dual SPs, dual/quad back-end FC buses)
CLARiiON Storage Objects
A first look at LUNs, and access to them
(RAID Groups, LUNs, metaLUNs, Storage Groups)
Modular Building Blocks in Storage system
The CLARiiON storage system is based upon a modular architecture. There are four
building blocks in a Clariion.
DPE - Disk Processor Enclosure: contains both disks and storage processors
DAE - Disk Array Enclosure: contains disks only
SPE - Storage Processor Enclosure: contains storage processors
SPS - Standby Power Supply: provides battery backup protection
DAE (Disk Array Enclosure)
Disk Status LEDs
Green for connectivity
Blinks during disk activity
Amber for Fault
Enclosure Status LEDs
Green = Power, Amber = Fault
SPE (Storage Processor Enclosure)
Front view of SPE
SPE
Rear view of SPE
SPS (Standby Power Supplies)
The CLARiiON is powered on or off using the switch on the SPS.
The RJ11 connection runs to the Storage Processor and is used to communicate loss of AC power, signaling the SP to begin the vault operation. Once the vault operation is complete, the SP signals the SPS that it is OK to remove AC power.
Note: Until the batteries are fully charged, write caching will be disabled.
DAE-OS Front view
The DAE-OS contains slots for 15 dual-ported Fibre Channel disk drives. The first five
drives are referred to as the Vault drives.
Disks 0-3 are required to boot the Storage Processors; disks 0-4 are required to enable write caching.
These disks must remain in their original slots!
The DAE-OS enclosure must be connected to bus zero and assigned enclosure
address 0.
Private Space
Private space on Vault/Code Drives
The first five drives in the first DAE are called the code drives.
They are also used for vaulting purposes.
6.5 GB of each code drive is reserved to store the FLARE image, the SPA and SPB boot images, the PSM LUN, and the vault.
FLARE is triple-mirrored.
The PSM LUN is triple-mirrored.
Vault:
The vault is a reserved area found on the first nine disks of the DPE in the FC series and the first five disks of the DPE on the CX series.
Data in the write cache is dumped to the vault area in a power-failure emergency.
Once the system is turned back on, the vault transfers the dumped data back to cache.
PSM LUN:
The Persistent Storage Manager (PSM) LUN, created at the time of initialization by Navisphere, is a hidden LUN where the configuration records and the Access Logix
database are stored.
It resides on the first three code drives.
Both SPs can access the single PSM and update themselves with new configurations via
the CLARiiON Messaging Interface (CMI).
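The vault sequence above can be sketched in a few lines of Python. This is a conceptual model only, not EMC code; the class and method names are invented for illustration:

```python
# Hypothetical sketch of the vault sequence: on AC loss, dirty write-cache
# pages are dumped to the reserved vault area on the code drives; on power-up
# they are restored to cache. Write caching stays disabled until the SPS
# batteries recharge.

class StorageProcessor:
    def __init__(self):
        self.write_cache = {}   # LBA -> data, dirty pages awaiting de-stage
        self.vault = {}         # reserved private space on the code drives
        self.write_caching_enabled = True

    def on_ac_power_loss(self):
        """SPS signals AC loss: dump the write cache to the vault, then allow power-off."""
        self.vault = dict(self.write_cache)
        self.write_cache.clear()
        self.write_caching_enabled = False

    def on_power_up(self):
        """Restore the dumped data from the vault back into cache for de-staging to disk."""
        self.write_cache.update(self.vault)
        self.vault.clear()

sp = StorageProcessor()
sp.write_cache = {100: b"dirty-A", 200: b"dirty-B"}
sp.on_ac_power_loss()
assert sp.write_cache == {} and len(sp.vault) == 2
sp.on_power_up()
assert sp.write_cache == {100: b"dirty-A", 200: b"dirty-B"}
```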
CLARiiON Operating Environment
The CLARiiON array's boot operating system is either Windows NT or Windows XP,
depending on the processor model. After booting, each SP executes FLARE software. FLARE manages all
functions of the CLARiiON storage system (provisioning, resource allocation, memory
management, etc.).
Access Logix is optional software that runs within the FLARE operating
environment on each storage processor (SP). It is used for LUN masking.
Navisphere provides a centralized tool to monitor, configure, and analyze the
performance of CLARiiON storage systems.
CLARiiON can also be managed as part of EMC ControlCenter, allowing full end-to-end
management.
Other array software includes SnapView, MirrorView, and SAN Copy.
Clariion Management
Basic Clariion Management
EMC Navisphere Manager
Browser-based
Manages multiple storage systems and multiple hosts
Managing RAID Groups
Managing LUNs
Managing advanced functionality (Storage Groups, metaLUNs, SnapView,
MirrorView, SAN Copy etc)
Relies on host agent and SP agent
Single Point of Management (SPoM)
EMC Navisphere CLI / Secure CLI
Managing the storage system
Managing RAID Groups
Managing LUNs
Managing advanced functionality
Software Components
Array Software
Base (FLARE) code (with or without Access Logix)
Array agent
Management Server
Management UI
SnapView
MirrorView
SAN Copy
Management Station Software
Internet Explorer or Netscape
Java
Navisphere Management UI
ClarAlert
Host Software
Navisphere Host Agent
HBA drivers
PowerPath
Note: The Navisphere UI may run either on the management station or on the array.
Initializing a CLARiiON
Initializing an array refers to setting the TCP/IP network parameters and
establishing domain security.
The array can be initialized using a serial connection or a point-to-point network
connection (default IP: http://192.168.1.1/setup).
We can set the network parameters: IP address, hostname, subnet mask, gateway, and peer IP (SP A/B).
Further array configuration is performed using either the GUI or the CLI after the array
has been initialized.
Array name, access control, Fibre Channel link speeds, etc.
Additional domain users and Privileged User Lists
Read and write cache parameters
Storage objects:
RAID Groups
LUNs
Storage Groups
Component Communication in Managing the CLARiiON
Clariion Management
In-Band Management
The Navisphere agent (Naviagent) converts management traffic between TCP/IP and SCSI, so commands from the Navisphere GUI on the management host reach FLARE over the FC fabric.
Out-of-Band Management
The Navisphere GUI on the management host talks directly to the management server on each SP over the RJ-45 TCP/IP network connection.
A domain contains one master; the other storage systems are treated as slaves.
We can configure a name for the storage domain (default name: Domain Default).
Each storage system can be a member of only one domain.
Navisphere Users
There are three user roles:
Administrator - can do anything, including creating and deleting users.
Manager - can fully manage the array but cannot create, modify, or delete other users.
Monitor - can only view.
There are two scopes:
Local
Global
Classic Navisphere CLI used a privileged user list to authenticate user requests.
The Array Agent's privileged user list does not include user1, and therefore the
request is denied.
The privileged user list now includes user1 as a privileged user when logged in at IP
address 10.128.2.10.
The Host Agent also uses its own privileged user list. This illustrates an attempt by
Management Server to restart the Host Agent on a computer whose IP address is
10.128.2.10. The Host Agent will refuse the command unless the array is listed as a
privileged user in agent.config.
While an SP does not have a login user ID, the default user name of system is used for
the SP. The format of the privileged user list in the Host Agent's agent.config file is
system@.
Clariion configuration
Introduction to Navisphere Manager
Configure the CLARiiON: security (domain configuration, creating user accounts, etc.),
cache, verifying available software, Access Logix, network
configuration, verifying SP WWNs, setting SP agent privileged users, etc.
Create RAID Groups
Bind LUNs and metaLUNs
Initiator records and host registration
Access Logix
Create Storage Groups
RAID Groups and LUNs
RAID Group:
A RAID Group is a collection of physical drives from which an administrator may bind
one or more LUNs.
Once the first LUN is bound within a RAID Group, all other LUNs within the RAID Group
will share the same protection scheme.
Using the Navisphere GUI and/or CLI we can administer RAID Groups (create,
expand, destroy, etc.).
LUN:
A LUN is a Logical Unit.
The process of creating a LUN is called binding.
When presented to a host it is assigned a Logical Unit Number, and it appears to the
host as a disk drive.
Using the Navisphere GUI and/or CLI we can administer LUNs (bind a LUN, change
LUN properties, unbind a LUN, etc.).
MetaLUN:
A collection of individual LUNs that act together and are presented to a host
or application as a single storage entity.
Created by taking new and/or pre-existing LUNs and logically connecting them
together.
Expands existing volumes while online:
Concatenated
Striped
Combined striped and concatenated
MetaLUN Terminology
FLARE LUN (FLU)
A logical partition of a RAID group. The basic logical units managed by FLARE, which
serve as the building blocks for MetaLUN components.
MetaLUN
A storage volume consisting of two or more FLUs whose capacity grows dynamically
by adding FLUs to it
Component
A group of one or more FLARE LUNs that get concatenated to a MetaLUN as a single or
striped unit
Base LUN
The original FLARE LUN from which the metaLUN is created. The metaLUN is created
by expanding the base LUN's capacity.
Note: The metaLUN is presented to the host exactly as it was before the
expansion, i.e. the name, LUN ID, SCSI ID, and WWN are the same. The only thing that
changes is that the capacity is increased.
To expand a LUN, right-click on the LUN and select Expand. This invokes the Storage
Wizard.
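The expansion behavior described above can be sketched in Python. This is an illustrative model only (the class and attribute names are invented, not the FLARE API): the metaLUN's identity stays fixed while its capacity grows as component FLUs are added.

```python
# Hypothetical sketch of metaLUN expansion: identity (name, LUN ID, WWN)
# is preserved; usable capacity is the sum of the component FLU capacities,
# whether the components are concatenated or striped.

class MetaLUN:
    def __init__(self, name, lun_id, wwn, base_gb):
        self.name, self.lun_id, self.wwn = name, lun_id, wwn
        self.components = [base_gb]          # the base LUN is the first component

    def expand(self, flu_gb):
        """Add a FLU (concatenated or striped) to grow the metaLUN online."""
        self.components.append(flu_gb)

    @property
    def capacity_gb(self):
        return sum(self.components)

m = MetaLUN("data_lun", lun_id=5, wwn="60:06:01:...", base_gb=100)
m.expand(50)
m.expand(50)
assert m.capacity_gb == 200          # capacity grew from 100 to 200 GB
assert (m.name, m.lun_id) == ("data_lun", 5)   # identity unchanged
```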
LUN Mapping
The FC SCSI protocol allows multiple LUNs behind a single target.
To allow this on Solaris, we need to map the LUNs in the /kernel/drv/sd.conf file and update
the driver
using # update_drv -f sd
Example:
name=sd parent=lpfc target=0 lun=1
name=sd parent=lpfc target=0 lun=2
Access Logix
What Access Logix is
Why Access Logix is needed
Configuring Access Logix
Storage Groups
Configuring Storage Groups
Access Logix is a licensed software package that runs on each storage processor.
SAN switches allow multiple hosts physical access to the same SP ports.
Without Access Logix, all hosts would see all LUNs.
Access Logix solves this problem with LUN masking, implemented through Storage Groups.
Controls which hosts have access to which LUNs.
Allows multiple hosts to effectively share a CLARiiON array.
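The masking idea can be illustrated with a small Python sketch. This is a conceptual model (the group and host names are made up, not Access Logix internals): every host can physically reach every SP port, but a host only sees the LUNs in the Storage Group it is connected to.

```python
# Hypothetical sketch of Storage-Group-based LUN masking: visibility is
# decided per host by Storage Group membership, not by physical connectivity.

storage_groups = {
    "SG_web": {"luns": {0, 1}, "hosts": {"webserver"}},
    "SG_db":  {"luns": {2, 3}, "hosts": {"dbserver"}},
}

def visible_luns(host):
    """Return the set of LUNs masked in (visible) to the given host."""
    return set().union(*(g["luns"] for g in storage_groups.values()
                         if host in g["hosts"]))

assert visible_luns("webserver") == {0, 1}
assert visible_luns("dbserver") == {2, 3}
assert visible_luns("rogue") == set()   # a host in no Storage Group sees nothing
```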
Initiator Records
Initiator records are created during Fibre Channel Login
HBA performs port login to each SP port during initialization
Initiator-Registration records are stored persistently on array
LUNs are masked to all records for a specific host
Access Control Lists map LUN UIDs to the set of initiator records associated with a
host.
Manual and Auto Registration
Automatic Registration:
Registration is performed automatically when an HBA is connected to an array.
There are two parts to the registration process:
Fibre Channel port login (PLOGI), where the HBA logs in to each SP port
Creates initiator records for each connection
Viewed in Navisphere under Connectivity Status
Host Agent registration, where the host agent completes the initiator record
information with host information
Manual Registration:
The Group Edit button, on the Connectivity Status main screen, allows manual
registration of a host that has logged in.
In the FC series we need to perform manual registration. In the CX series, registration is done
automatically if the host agent is installed on the fabric hosts.
Storage Groups
Managing Storage Groups
Creating Storage Groups
Viewing and changing Storage Group properties
Adding and removing LUNs
Connecting and disconnecting hosts
Destroying Storage Groups
LUN Masking with Access Logix
All LUNs are accessible through all SP ports
LUN ownership is active/passive
LUNs are assigned to Storage Groups
When a host is connected to a Storage Group, it has access to all LUNs within
the Storage Group
Switch Zoning and Access Logix
Zoning determines which hosts see which ports on a storage system
Fabric-level access control
Multiple hosts may be zoned to share the same ports
Access Logix determines which LUNs are accessible to which host
LUN-level access control
Both zoning and Access Logix are used together
Access Logix Limits
A host may be connected to only one Storage Group per array
If there are multiple arrays in the environment, a host may be connected to one Storage Group in each array
The number of hosts per storage system varies based on the number of connections
Maximum of 256 LUNs in a Storage Group
A Storage Group is local to one storage system
The host agent must be running; if not, manually register the initiators
The user must be authorized to manage Access Logix
Persistent Binding
The c# refers to the HBA instance, the t# refers to the target instance (an SP front-end
port), and the d# is the SCSI address assigned to the LUN.
The HBA number and the SCSI address are static, but the t# by default is assigned in
the order in which the targets are identified during the configuration process of a
system boot. The order in which a target is discovered can differ between reboots.
Persistent binding binds the WWN of an SP port to a t# so that every time the system
boots, the same SP port on the same array will have the same t#.
HBA configuration files
/kernel/drv/lpfc.conf for Emulex
Persistent binding maps an SP port WWPN to a controller/target address:
500601604004b0c7:lpfc0t2
Disable auto-mapping in lpfc.conf (automap=0)
PowerPath
What is PowerPath?
Host-based software that resides between the application and the SCSI device driver
Provides intelligent I/O path management
Transparent to the application
Automatic detection of, and recovery from, host-to-array path failures
EMC PowerPath
(Diagram: PowerPath sits above the SCSI device driver and presents the paths to LUN 0 as a single pseudo-device, EMCPOWER 0.)
Clariion Architecture
CLARiiON supports an Active-Passive architecture
LUNs are owned by a Storage Processor
When LUNs are bound, a default LUN owner is assigned
In the event of a SP or path failure, LUNs can be trespassed to the peer Storage
Processor
Trespass is a temporary change in ownership
When the storage system is powered-on, LUN ownership returns to the Default
Owner
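The active-passive ownership model above can be sketched in Python. This is a hypothetical illustration (the class and SP names are invented): each LUN has a default SP owner, a failure trespasses it to the peer SP, and a power cycle returns it to the default owner.

```python
# Hypothetical sketch of CLARiiON LUN trespass: ownership is active/passive,
# trespass is temporary, and power-up restores the default owner.

class Lun:
    def __init__(self, lun_id, default_owner="SPA"):
        self.lun_id = lun_id
        self.default_owner = default_owner   # assigned when the LUN is bound
        self.current_owner = default_owner

    def trespass(self):
        """Temporarily move ownership to the peer Storage Processor."""
        self.current_owner = "SPB" if self.current_owner == "SPA" else "SPA"

    def power_cycle(self):
        """On array power-up, ownership reverts to the default owner."""
        self.current_owner = self.default_owner

lun = Lun(0, default_owner="SPA")
lun.trespass()                       # path or SP failure on SPA
assert lun.current_owner == "SPB"    # LUN served by the peer SP
lun.power_cycle()
assert lun.current_owner == "SPA"    # back to the default owner
```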
Path Failover
PowerPath Utility Kit
The PowerPath Utility Kit is intended for host environments where there is only a single
HBA and a need to perform SP failover, but
no load balancing or HBA failover.
Zoning configuration example:
HBA 1 to SP A port 0
HBA 1 to SP B port 0
PowerPath Administration
PowerPath settings on the CLARiiON for each host:
Tools > Failover Setup Wizard
(Enable arraycommpath and set the failover mode to 1 for PowerPath.)
PowerPath administration provides both a GUI (Windows) and a CLI (all platforms).
CLI administration:
1. Install the PowerPath package on the hosts.
2. Update the PATH variable to include /etc so all PowerPath commands work.
3. Add the PowerPath license: # /etc/emcpreg -add <license key>
# /etc/emcpreg -list lists the installed PowerPath license details
4. To verify that PowerPath devices are configured on the host:
# powermt display dev=all
5. To configure any missing logical devices:
# powermt config
6. To remove dead paths:
# powermt check
7. # powermt restore - restores dead paths after they have been repaired
Clariion Applications
SnapView Snapshots
SnapView Clones
SAN Copy
Mirror Copy (MirrorView)
SnapView Overview
SnapView helps to create point-in-time copies of data
Provides support for consistent online backup or data replication
Data copies can be used for purposes other than backup (testing, decision-support scenarios)
SnapView components:
Snapshot
Uses pointer-based replication and copy-on-first-write technology
Makes use of a Reserved LUN Pool to save data chunks
Has three managed objects: snapshot, session, Reserved LUN Pool
Clone
Makes full copies of the source LUN
Tracks changes to the source LUN and clones in the Fracture Log
Has three managed objects: Clone Group, clone, Clone Private LUN
Clones and snapshots are managed by Navisphere Manager and Navisphere CLI
Snapview Snapshots
Snapshot Definition
SnapView Snapshot - an instantaneous, frozen, virtual copy of a LUN on a storage
system
Instantaneous
Snapshots are created instantly; no data is copied at creation time
Frozen
The snapshot will not change unless the user writes to it
The original view is available by deactivating a changed snapshot
Virtual copy
Not a real LUN; made up of pointers, original and saved blocks
Uses a copy-on-first-write (COFW) mechanism
Requires a save area: the Reserved LUN Pool
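The COFW mechanism can be sketched in a few lines of Python. This is a simplified conceptual model (chunk granularity, names, and structures are invented for illustration, not SnapView internals):

```python
# Hypothetical COFW sketch: before a source-LUN chunk is first overwritten
# during a session, its original contents are saved to the reserved LUN pool.
# The snapshot reads saved chunks from the pool and unchanged chunks straight
# from the source, yielding a point-in-time view without a full copy.

class SnapSession:
    def __init__(self, source):
        self.source = source        # chunk index -> data (the live LUN)
        self.reserved_pool = {}     # originals saved on first write

    def write(self, chunk, data):
        if chunk not in self.reserved_pool:            # first write to this chunk?
            self.reserved_pool[chunk] = self.source[chunk]   # COFW: save original
        self.source[chunk] = data

    def snapshot_read(self, chunk):
        """Point-in-time view: saved original if the chunk changed, else current data."""
        return self.reserved_pool.get(chunk, self.source[chunk])

lun = {0: "A", 1: "B"}
sess = SnapSession(lun)
sess.write(0, "A'")                     # production host updates chunk 0
assert lun[0] == "A'"                   # the source LUN sees the new data
assert sess.snapshot_read(0) == "A"     # the snapshot still sees the original
assert sess.snapshot_read(1) == "B"     # unchanged chunks come from the source
```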
SnapView Snapshot Components:
Reserved LUN Pool
Snapview Snapshot
Snapview Session
Production Host
Backup Host
Source LUN
Copy on First Write (COFW)
Rollback
Reserved LUN Pool:
A collection of LUNs to support the pointer-based design of
SnapView. The total number of reserved LUNs is limited; the limit is model-dependent.
SnapView Snapshot:
A defined virtual device that is presented to a host and enables visibility into a running
session.
The snapshot is defined under a source LUN.
A snapshot can only be assigned to a single session.
Snapshot Session:
The process of defining a point-in-time designation by invoking copy-on-first-write activity
for updates to the source LUN. Starting a session assigns a reserved LUN to the source
LUN.
As far as this session is concerned, until a snapshot is activated, the point-in-time
copy is not visible to any servers.
At any time we can activate a snapshot for this session in order to present the point-in-time
image to a host.
Each source LUN can have up to eight sessions.
Production Host:
The server where the customer applications execute
Source LUNs are accessed from the production host
Backup Host:
The host where the backup process occurs
The backup media is attached to the backup host
Snapshots are accessed from the backup host
Source LUN: the LUN containing production data on which we want to start a SnapView
session and optionally activate a snapshot to that session
COFW: the copy-on-first-write mechanism involves saving an original data area from
the source LUN into a reserved LUN area when that data block in the active file
system is about to be changed
Rollback: enables recovery of the source LUN by copying data in the reserved LUN
back to the source LUN
Once a session starts, the SnapView mechanism tracks changes to the LUN, and
Reserved LUN Pool space is required.
Source LUNs cannot share reserved (private) LUNs.
Managing Snapshots
Procedure to Create and Manage Snapshots:
1. Configure the Reserved LUN Pool
Reserved LUN Pool > Configure > add LUNs for both SPs
2. Create a Storage Group for the production host and add the source LUN
3. Create a file system on the source LUN and add data
4. Create a snapshot from LUN 0
Storage Group > Source LUN > SnapView > Create Snapshot
5. Create a snap session from LUN 0
Storage Group > Source LUN > SnapView > Start SnapView Session
6. Activate the snapshot
SnapView > Snapshots > select the snapshot > Activate Snapshot (select a
session for that snapshot)
7. Create a Storage Group for the backup host and add the snapshot virtual LUN
8. Mount the emc device of the snap LUN on the backup host
9. Verify the data
10. Make some modifications from the production host
11. Unmount the production LUN
12. Perform a rollback of the SnapView session
SnapView > Sessions > select session > Start Rollback
13. Remount the production LUN and observe the old data
SnapView Clones
SnapView Clones - Overview
SnapView Clone - a full copy of a LUN internal to a storage system
Clones take time to populate (synchronize)
A clone is independent of the source once synchronization is complete
Two-way synchronization:
Clones may be incrementally updated from the source LUN
Source LUNs may be incrementally updated from a clone
A clone must be EXACTLY the same size as the source LUN
Snapshots and Clone Features
Managing SnapView Clones
Procedure to create clones:
1. Prepare the Clone Private LUN (CPL) and Fracture Log
Storage System > SnapView > Clone Feature Properties (add the private LUNs)
Fracture Log:
Located in SP memory
A bitmap
Tracks modified extents between the source LUN and each clone
Allows incremental resynchronization in either direction
Clone Private LUN:
One private LUN for each SP
Must be 128 MB (250,000 blocks) or greater
Used for all clones owned by the SP
No clone operations are allowed until the CPL is created
Contains persistent Fracture Logs
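The Fracture Log's bitmap idea can be sketched in Python. This is an illustrative model only (extent size, names, and structures are invented, not the on-array format): because only the modified extents are tracked, resynchronization after a fracture is incremental rather than a full copy.

```python
# Hypothetical sketch of the Fracture Log: one "dirty" bit per extent marks
# source-LUN regions modified while the clone is fractured; resync copies
# only those extents and then clears the log.

class FractureLog:
    def __init__(self, num_extents):
        self.dirty = [False] * num_extents   # bitmap: one bit per extent

    def mark(self, extent):
        """Record that a source extent was modified while fractured."""
        self.dirty[extent] = True

    def resync(self, source, clone):
        """Copy only the dirty extents to the clone; return how many were copied."""
        copied = 0
        for i, is_dirty in enumerate(self.dirty):
            if is_dirty:
                clone[i] = source[i]
                copied += 1
        self.dirty = [False] * len(self.dirty)   # log cleared after resync
        return copied

source = ["a", "b", "c", "d"]
clone  = ["a", "b", "c", "d"]            # in sync at the moment of fracture
log = FractureLog(4)
source[2] = "C"
log.mark(2)                              # one extent changed after the fracture
assert log.resync(source, clone) == 1    # incremental: only 1 extent copied
assert clone == source
```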
2. Create a Storage Group for a host and add the source LUN
3. Create a file system on the emc device and add data
4. Create a Clone Group for the source LUN
Storage System > SnapView > Create Clone Group
5. Add a clone to the Clone Group (verify the Synchronized status)
SnapView > Clones > Clone Group > Add Clone
6. Fracture the clone
SnapView > Clones > clone > Fracture
7. Add the clone to the backup host's Storage Group
8. Mount the emc device of the clone on the backup host and check the data
9. Add some data to the clone through the backup host
10. Initiate Reverse Synchronization and observe the updated data from the production side
Mirror Copy
Agenda
Types of mirror copy
Synchronous (MirrorView/S)
Asynchronous (MirrorView/A)
How MirrorView makes remote copies of LUNs
The required steps in MirrorView administration
MirrorView with SnapView
Mirror Copy Overview
Optional storage-system-based software
This product is designed as a storage-system-based disaster-recovery (DR) solution
for mirroring local production data to a remote/disaster-recovery site.
MirrorView/S is a synchronous product that mirrors data between local and
remote storage systems.
MirrorView/A is an asynchronous product that offers extended-distance replication
based on a periodic incremental-update model.
Business requirements determine the structure of the DR solution:
the business will decide how much data loss is tolerable and how soon the data
must be accessible again in the event of a disaster.
It is a requirement that critical business information always be available. To protect
this information, a DR plan must be in place to safeguard against any
disaster.
Recovery objectives: service levels that must be met to minimize
the loss of information and revenue in the event of a disaster.
The criticality of the business application and information defines the recovery objectives.
The terms commonly used to define the recovery objectives are:
Recovery point objective (RPO)
Recovery time objective (RTO)
Recovery point objective: defines the amount of acceptable data loss in the event of a
disaster.
RPO is typically expressed as a duration of time.
Some applications may have zero tolerance for loss of data in the event of a disaster
(example: financial applications).
Recovery time objective (RTO):
RTO is defined as the amount of time required to bring the business application back
online after a disaster occurs. Mission-critical applications may be required to be back online in seconds, without
any noticeable impact to the end users.
Replication Models
Replication solutions can be broadly categorized as synchronous and asynchronous.
Synchronous replication model:
In a synchronous replication model, each server write on the primary side is written
concurrently to the secondary site.
RPO is zero, since the transfer of each I/O to the secondary occurs before the
acknowledgement is sent to the server.
Data at the secondary site is exactly the same as data at the primary site at the time of
disaster.
Asynchronous replication model:
Asynchronous replication models decouple the remote replication of the I/O from the
acknowledgement to the server.
This allows longer-distance replication because application write response time is not
dependent on the latency of the link. Periodic updates happen from primary to secondary at a user-determined frequency.
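The RPO difference between the two models can be shown with a toy Python sketch. This is a conceptual illustration only (the functions and data structures are invented, not MirrorView code):

```python
# Hypothetical sketch: synchronous mirroring replicates each write before the
# host acknowledgement, so RPO = 0; asynchronous, periodic-update mirroring
# can lose the writes made since the last completed update cycle.

def sync_write(primary, secondary, block, data):
    primary[block] = data
    secondary[block] = data        # replicated BEFORE the host ack -> RPO = 0

def async_update(primary, secondary):
    secondary.clear()
    secondary.update(primary)      # periodic bulk update at a user-set frequency

primary, secondary = {}, {}
sync_write(primary, secondary, 0, "x")
assert secondary == primary        # no data can be lost at this instant

primary, secondary = {}, {}
primary[0] = "x"                   # host ack'd immediately; not yet replicated
assert secondary != primary        # a disaster now would lose this write
async_update(primary, secondary)
assert secondary == primary        # consistent again after the update cycle
```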
Bidirectional Mirroring
MirrorView Terminology and Data States
Primary image: the LUN that contains production data, the contents of which are replicated to
the secondary image.
Secondary image: a LUN that contains the mirror of the primary image LUN. This LUN
must reside on a different CLARiiON than the primary image.
Fracture: a condition in which I/O is not mirrored to the secondary image (admin
fracture, system fracture).
Promote: the operation by which the administrator changes an image's role from
secondary to primary. As part of this operation the previous primary becomes the
secondary.
Data States
Out-of-sync - a full synchronization is needed
In-sync - the primary LUN and secondary LUN contain identical data
Consistent - the state in which a secondary image is a byte-for-byte duplicate of the
primary image, either now or at some point in the past. Transition from this state
can occur to either the synchronizing state or the in-sync state.
Synchronizing - a mirror synchronization operation is in progress
MirrorView/S Fracture Log and Write Intent Log
Fracture Log:
Resident in SP memory, hence volatile
Tracks changed regions on Primary LUN when Secondary is unreachable
When Secondary becomes reachable, Fracture Log is used to resynchronize data
incrementally
Fracture Log is not persistent if Write Intent Log is not used
Write Intent Log:
Optional - allocated per mirrored primary LUN
Persistently stored - uses private LUNs
Used to minimize recovery in the event of a failure on the primary storage system
Two LUNs of at least 128 MB each
Comparison between SnapView clones and
MirrorView/Synchronous
MirrorView Mirror Creation
Connect storage systems
Physically, by zoning
Logically, by Manage MirrorView Connections dialog
Create Remote Mirror
Designate a LUN to be a primary LUN
Specify a mirror name and a mirror type
Add secondary image(s)
Mirror is created in the inactive state, quickly changes to active
Remote MirrorView Connection
Configure and Manage MirrorView/S
1. Add the source LUN to a storage group, create a file system on it, and store some data.
2. Manage mirror connections.
Storage System > MirrorView > Manage Mirror Connections
3. Allocate the write intent log.
Storage System > MirrorView > Allocate Write Intent Log
4. Create the remote mirror.
Storage System > MirrorView > Create Remote Mirror
5. Add the secondary image.
Remote Mirrors > select the mirror > Add Secondary Image
6. Promote the secondary, add the LUN to any DR storage group, and verify
the data.
MirrorView with SnapView
SnapView is CLARiiON's storage-system-based replication software for local
replicas.
It supports both snapshots and clones.
When used with MirrorView, SnapView can provide local replicas of primary
and/or secondary images. It allows secondary access to data at the
primary location, the secondary location, or both, without taking production data
offline for backup, testing, etc.
SAN Copy
What is SAN Copy?
SAN Copy is optional software available on the storage system. It enables the storage system to
copy data at a block level directly across the SAN from one storage system to another,
or within a single CLARiiON system.
SAN Copy can move data from one source to multiple destinations concurrently.
SAN Copy connects through a SAN, and also supports protocols that let you use an IP
WAN to send data over extended distances.
SAN Copy is designed as a multipurpose replication product for data migrations,
content distribution, and disaster recovery (DR).
SAN Copy does not provide the complete end-to-end protection that MirrorView
provides.
SAN Copy Overview
Bulk data transfer
CLARiiON to/from CLARiiON, Symmetrix, and other vendors' storage
The source LUN may be a point-in-time copy
SnapView clone, SnapView snapshot, Symmetrix point-in-time copy
SAN-based data transfer
Offloads host traffic; no host-to-host data transfer
Higher performance, less traffic
OS-independent
Full or incremental copies
SAN Copy CLARiiON ports must be zoned to the attached storage system's ports
LUNs must be made available to the SAN Copy ports
SAN Copy cannot share an SP port with MirrorView
SAN Copy Features and Benefits
Features:
Multiple Sessions can run concurrently
A Session may have multiple destinations
Implementing SAN Copy Over Extended Distances
SAN Copy has several benefits over host-based replication options:
Performance is optimal because data is moved directly across the SAN.
No host software is required for the copy operation because SAN Copy executes on
a CLARiiON storage system.
SAN Copy offers interoperability with many non-CLARiiON storage systems.
SAN Copy Creation Process Flow
On the source storage system:
1. Prepare the source LUN with file system data
2. Create the Reserved LUN Pool
3. Configure SAN Copy connections
Storage System > SAN Copy > Connections > register each selected SAN Copy
port to ports of the peer storage systems
4. Once the registration process is complete, we can connect the SAN Copy port to a
Storage Group on the peer CLARiiON storage system.
On the destination storage system:
6. Create a LUN of the same size and create a storage group (SANCOPY)
7. Add the LUN to the storage group
8. Assign SAN Copy connections to a storage group
Each SAN Copy port acts like a host initiator and, therefore, can connect to only one
Storage Group at a time in a storage system.
Storage Group > SANCOPY > Connections
On the source storage system:
9. Create a session:
Storage System > SAN Copy > Create Session
Select the source LUN from the local storage and the destination LUN from the other storage, and
select it as a FULL session
10. Start the session
11. Add the destination LUN to any host storage group and verify the data by
mounting it
12. Update the source LUN, create an incremental session, and verify the data.
Thank You