8/18/2019 Student Guide - NetApp Accredited Storage Architecture Professional Workshop
The information contained in this course is intended only for training. This course contains information and activities that, while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other severe consequences in a production environment. This course material is not a technical reference and should not, under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product documentation that is located at http://now.netapp.com/.
No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of NetApp, Inc.
U.S. GOVERNMENT RIGHTS
Commercial Computer Software. Government users are subject to the NetApp, Inc. standard license agreement and applicable provisions of the FAR and its supplements.
No other vendor provides this kind of capability. Now customers can provide a common pool of storage
across virtual and physical servers regardless of protocol. Customers can support multiple tiers from the same pool. Customers can unify entire storage infrastructures, including mixed-vendor storage arrays.
You may think that customers must sacrifice performance with this approach, but NetApp systems stand up to demanding performance requirements.
NetApp systems are multitier and multi-use.
Customers can unify mixed-vendor storage arrays.
Unified Storage at a Glance
[Slide diagram: NetApp FAS systems and NetApp V-Series systems running Data ONTAP form a data abstraction layer over storage arrays, including mixed-vendor arrays, creating one logical pool of storage. The pool is multitier and multi-use, serving applications and servers (physical and virtual, VM1 through VM4) over FC, iSCSI, NFS, CIFS, and FCoE on the enterprise network, and supporting disk-to-disk backup, disaster recovery, and remote use.]
NetApp Confidential 4
NetApp storage solutions are based on the Data ONTAP architecture, a highly optimized, scalable, and flexible OS that:
Starts with a storage-virtualization engine that provides an end-to-end solution in a single integrated platform. The capabilities that are built into Data ONTAP software specifically address the challenges that are shown on the previous slide.
Provides the ability to scale infrastructure (small, medium, or large) over time and across heterogeneous physical components
Allows management of storage from an application point of view, which results in the ability to delegate and automate tasks
The Data ONTAP Operating System
Application-Centric Storage
Manage data from applications:
– Application administrator self-management within an established storage policy
– Application synchronization
Use a single storage-virtualization engine:
– Management of storage pools instead of hardware
– The heart of virtualized data management
Simplify elements to be managed:
– Choices for capacity, performance, and cost
– Support for SAN and network-attached storage (NAS) protocols
– Architecture for availability and simplicity
[Slide diagram: the Data ONTAP architecture as a multiprotocol, unified platform for application-centric data management, spanning FAS2000, FAS3000, and FAS6000 systems and V-Series systems that front HP, EMC, and HDS arrays.]
The virtualized pool of storage is fronted by a three-layer approach:
Layer 1 (Data Layout): The Write Anywhere File Layout (WAFL) file system provides the highest write and read efficiency, which results in the lowest latency. Because writes go to free space rather than overwriting blocks in place, data writes do not cause any nonessential spinning of drives, thereby increasing drive longevity.
Layer 2 (Protocol): By allowing SAN and network-attached storage (NAS) access over multiple protocols, the Data ONTAP platform affords the highest level of flexibility and usability. This unifying construct enables a truly simple-to-manage environment for all workloads.
Layer 3 (Services): Because data resides in the storage pool, which enables the highest level of efficiency across the dataset and other functionality that enables the application layer to achieve or exceed objectives, Data ONTAP software is the transformational platform in the market. Because of "virtual" constructs at all layers, Data ONTAP software provides the most flexible, scalable, and efficient platform that enables customers to address the changing needs of today and tomorrow's challenges.
Data ONTAP Layers
A Transformational Platform
Services Layer: multi-tenancy, storage efficiency, storage acceleration, data protection, and storage quality of service
Protocol Layer: NFS, CIFS, FC, iSCSI, and Ethernet
Data Layout Layer: WAFL file system (thin provisioning)
Storage Pool
This discussion starts with the core technologies that are listed in the middle of the slide. The Snapshot and FlexVol technologies have their own sections, because they are so important for you to understand and be able to explain to potential customers. These core technologies are what NetApp does, what NetApp is about, and why NetApp technologies can work the way that they do:
WAFL core technology
Snapshot technology
RAID 4 or RAID-DP technology
NVRAM operations
Aggregates and volumes
You will certainly talk with most customers about RAID technology and how NetApp RAID protection works, but getting down into the WAFL file system is usually not necessary. However, you must understand the WAFL file system whether you talk to customers about it or not. The system is integral to how NetApp storage products work.
At the core of the Data ONTAP operating system is the WAFL file system, the NetApp proprietary software that manages the placement and protection of storage data. Integrated with the WAFL system is NetApp RAID technology, which includes single- and double-parity disk protection. NetApp RAID technology is proprietary and fully integrated with the data-management and data-placement layers, which allows efficient data placement and high-performance data paths.
Closely integrated with NetApp RAID technology is the aggregate, which forms a storage pool by concatenating RAID groups. The aggregate controls data-placement and space-management activities.
The FlexVol volume is logically assigned to an aggregate but is not statically mapped to it. This dynamic mapping relationship between the aggregate layer and the FlexVol layer is integral to the innovative storage features of the Data ONTAP architecture.
The WAFL file system includes the necessary file and directory mechanisms to support file-based storage and the read and write mechanisms to support block storage or LUNs.
Note that the protocol-access layer is above the data-placement layer of the WAFL file system. This allows all of the data to be managed effectively on disk, regardless of how the data is accessed by the host. This level of storage virtualization offers significant advantages over other architectures that have a tight association between the network protocol and the data.
The WAFL File System
The WAFL file system is highly data-aware and enables the storage system to determine the most efficient data placement on disk.
Data is intelligently written in batches to available free space in the aggregate without changing existing blocks.
The aggregate can reclaim free blocks from one flexible volume (FlexVol volume) for allocation to another.
Data objects can be accessed through the NFS, CIFS, FC, FCoE, or iSCSI protocol.
[Slide diagram: the Data ONTAP architecture stack. A protocol layer (NFS protocol; CIFS protocol; FC, FCoE, and iSCSI) maps NFS, CIFS, and LUN semantics onto the WAFL file system's file mechanism, directory mechanism, and read and write mechanisms, which sit on the FlexVol volume, NetApp RAID technology, and the aggregate.]
DATA ONTAP COMPONENTS: THE WAFL FILE SYSTEM VERSUS “TRADITIONAL”
FILE SYSTEMS
Write Anywhere File Layout (WAFL) is the NetApp file system. It is the file-system layer of the Data ONTAP operating system, but what does the name "WAFL" mean? Sometimes potential customers are confused about the meaning. Sometimes this confusion is planted by NetApp competitors — an insidious sales technique known as FUD — fear, uncertainty, and doubt. Sometimes competitors suggest that the WAFL system does not protect data that is stored on disk because the WAFL system stores the data on disk just "anywhere." However, that is not what "WAFL" means. In fact, it is just the opposite. The important point is that unlike the majority of file systems, which require metadata to be recorded at a particular physical location on the disk, the WAFL file system can write metadata anywhere on the disk.
From a performance point of view, the WAFL system attempts to avoid the disk head having to write data in one location, move to a special portion of the disk to update the inodes — the metadata — move back to write more data, move again to update inodes, and so on across the physical disk medium. Head seeks happen quickly, but on server-class systems, you have thousands of disk accesses happening per second. This adds up quickly and greatly impacts the performance of the system, particularly on write operations. The WAFL system does not have that handicap and writes the metadata in line with the rest of the data. "Write anywhere" refers to the file system's ability to write any class of data at any location on the disk; in other words, it can choose where to put the data.
The basic goal of the WAFL system is to write to the "first best" available location. "First" refers to the closest available block. "Best" refers to the same address block on all disks, that is, a complete stripe. The first best available location is always a complete stripe across an entire RAID group that requires the least amount of head movement to access. That is arguably the most important criterion for choosing where the WAFL system locates data on a disk. That is what the term "write anywhere" refers to: the location of the metadata.
The Data ONTAP operating system controls where everything goes on the disks, so it can decide on the optimal location for data and metadata. This fact has significant ramifications for the way that the Data ONTAP operating system does everything, particularly in the operation of RAID and Snapshot technology.
Data ONTAP Components: The WAFL File System Versus "Traditional" File Systems

| | WAFL File System | Traditional File Systems |
| File data location | Anywhere on disk | Fixed location (LBA) |
| Metadata location | Anywhere on disk (except root inode) | Dedicated regions |
| Updates to existing data and metadata | Put in unallocated blocks (originals intact) | Overwrite existing data |
| Snapshot copies and versions | By design | Requires extra copy on write |
| File-system consistency | Guaranteed by design | Requires careful ordering of all writes |
| Crash recovery | Reboot, ready to go | Slow, complicated fsck |
| Interaction with RAID technology | Can write full stripes, utilizing bandwidth | Must seek for all updates |
The unique features of the WAFL file system offer many benefits.
The "write anywhere" function of the WAFL system intelligently writes new data to available free space on disk without having to move or modify the original data. Additionally, WAFL does not require manual tweaking or tuning to optimize data-placement behavior.
The WAFL system leverages a modern pointer architecture for data placement. Instead of statically mapping logical blocks to physical blocks at the time that a LUN is created, the WAFL system dynamically maps logical blocks to physical blocks when data is written to disk. The ability to provision a LUN or FlexVol volume independently of the available disk capacity is referred to as thin provisioning. It allows IT administrators to purchase disk capacity as needed, rather than requiring a full up-front investment.
Another feature of the WAFL system relates to the use of aggregates. Aggregates form a storage pool from RAID groups and are responsible for the assignment of logical data blocks to physical blocks on disk. Aggregates can be dynamically expanded by adding more RAID groups. And because the logical blocks in a NetApp LUN do not occupy a predefined space on disk, expanding an aggregate doesn't require data movement to restripe LUNs across the added disks. An aggregate is also aware of the data that it stores on disk. When data is deleted from a storage volume, such as a LUN, the aggregate knows that the free data blocks can be reclaimed and assigned to another volume or LUN as needed. The added value of the aggregate offers improved storage efficiency over legacy technologies.
NetApp FlexVol technology offers many advantages. The FlexVol volume is a dynamic storage object and is not statically assigned physical blocks at the time of creation. As a result, the FlexVol volume can be any size, even larger than the aggregate. And the volume can be dynamically resized, larger or smaller, without data loss. These advantages are also available to LUNs.
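The dynamic mapping described above can be sketched in a few lines of code. This is an illustrative toy model, not Data ONTAP internals: the `Aggregate` and `FlexVol` classes and their methods are invented for the example.

```python
# Toy sketch (not NetApp code) of an aggregate as a shared pool of physical
# blocks, with thin-provisioned volumes that consume blocks only on write.

class Aggregate:
    def __init__(self, physical_blocks):
        self.free = set(range(physical_blocks))   # unallocated physical blocks

    def allocate(self):
        if not self.free:
            raise RuntimeError("aggregate out of physical space")
        return self.free.pop()

    def reclaim(self, block):
        self.free.add(block)                      # freed blocks return to the pool


class FlexVol:
    """A volume whose logical size is independent of physical capacity."""

    def __init__(self, aggregate, logical_size):
        self.aggr = aggregate
        self.logical_size = logical_size          # may exceed physical capacity
        self.map = {}                             # logical block -> physical block

    def write(self, logical_block):
        if logical_block not in self.map:         # physical space used only on write
            self.map[logical_block] = self.aggr.allocate()

    def delete(self, logical_block):
        self.aggr.reclaim(self.map.pop(logical_block))


aggr = Aggregate(physical_blocks=100)
vol = FlexVol(aggr, logical_size=1000)            # thin: 10x the physical pool
vol.write(0)
vol.write(1)
print(len(aggr.free))                             # 98: only written blocks consume space
vol.delete(0)
print(len(aggr.free))                             # 99: reclaimed for any volume
```

The key point the sketch mirrors is that the volume's logical size is just a number: physical blocks are consumed on write and returned to the shared pool on delete.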
NetApp Data Layout

| Feature | Benefit |
| WAFL architecture | New data is intelligently written to available free space. |
| The WAFL file system leverages a pointer file-system architecture. | This facilitates dynamic storage virtualization, thin provisioning, and more. |
| An aggregate is statically mapped to RAID groups or physical blocks. | Aggregates provide an intelligent storage pool to manage block mapping. |
| FlexVol volumes are not statically assigned physical blocks at the time of creation. | A logical volume can be nearly any size without full up-front investment in physical capacity. |
| Data is logged into nonvolatile memory and then written to disk en masse. | Full stripe writes are used to optimize the write pattern across the aggregate and improve performance. |
Finally, NetApp uses intelligent caching and write patterns to improve write performance. Data from the host is logged into nonvolatile memory and then written to disk en masse. Writes to disk are optimized across all drives in the aggregate and contribute to improved data access.
The advantages of the WAFL system are demonstrated with three NetApp technologies: RAID-DP, Snapshot, and FlexClone.
NetApp conducted a survey of its internal system engineers, asking: "What are the most important technical features to the customers that you work with?"
The items highlighted are the ones that depend on Snapshot technology. Snapshot technology itself came in second, right after the tie for first place between RAID-DP® and FlexClone software. SnapRestore®, SnapLock®, SnapMirror®, SnapVault®, and SnapManager® were all considered important technical features by at least 69 percent of the NetApp customers with whom those SEs worked.
Snapshot copies are very important to all NetApp features. Snapshot copies are generally thought of in the marketplace as a way to get back to a previous version of the data. That use of Snapshot copies is fairly obvious. NetApp technology also leverages Snapshot technology for replication and compliance.
NetApp Greatest Hits
What are the most important features to the customer?

| NetApp Feature | % |
| RAID-DP® | 95% |
| FlexClone™ | 95% |
| Snapshot™ technology | 94% |
| Single OS | 89% |
| SnapRestore® | 89% |
| WAFL® integration | 86% |
| Multi-protocol | 86% |
| Data ONTAP simplicity | 85% |
| WAFL file system | 85% |
| FlexVol virtualization | 85% |
| iSCSI leadership | 85% |
| SnapLock® | 83% |
| SnapMirror® | 81% |
| SnapVault® | 76% |
| FlexVol performance | 73% |
| SnapManager® | 69% |
| Data ONTAP benefits | 68% |
| FlexVol provisioning | 68% |
| V-Series | 64% |
| FlexVol priorities | 61% |
| SnapDrive® software | 60% |
| LockVault™ | 60% |
| NAS leadership | 59% |
| Forced disk consistency | 40% |
This presentation was originally given by Dave Hitz, one of the NetApp founders and Executive Vice President of NetApp, Inc. He gave this presentation at the 2005 Fall Classic. It is a good description of Snapshot technology and of how our competitors' snapshot technologies work.
What Is a NetApp Snapshot Copy?
A Snapshot copy is a locally retained, point-in-time image of data. NetApp Snapshot technology is a feature of the WAFL storage-virtualization technology that is part of the Data ONTAP microkernel that ships with every NetApp storage system. A NetApp Snapshot copy is a "frozen," read-only view of a WAFL volume that provides easy access to old versions of files, directory hierarchies, and LUNs.
The high performance of NetApp Snapshot technology makes it highly scalable. A NetApp Snapshot copy takes only a few seconds to create — typically less than one second, regardless of the size of the volume or the level of activity on the NetApp storage system. After a Snapshot copy has been created, changes to data objects are reflected in updates to the current version of the objects, as if Snapshot copies did not exist. Meanwhile, the Snapshot version of the data remains completely stable. A NetApp Snapshot copy incurs no performance overhead; users can comfortably store up to 255 Snapshot copies per WAFL volume, all of which are accessible as read-only, online versions of the data.
How does NetApp Snapshot technology work?
Data ONTAP architecture starts the same way as any random-access medium, with pointers to physical locations, just as on USB thumb drives or any other type of disk, such as floppy disks. When Data ONTAP software creates a Snapshot copy, it preserves the inode map as it is at that point in time and then continues to make changes to the inode map on the active file system. Data ONTAP software keeps the old version of the inode map. No data movement occurs at the time that the Snapshot copy is created.
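The pointer behavior just described can be sketched in code. This is an illustrative model only, not Data ONTAP internals: the class, block names, and map structure are invented for the example.

```python
# Illustrative sketch of pointer-based snapshots: creating a Snapshot copy
# preserves the current block map (pointers only), and later writes go to
# new blocks rather than overwriting the originals.

class WaflLikeVolume:
    def __init__(self, blocks):
        # stand-in for the active inode map: logical name -> physical block
        self.active = dict(blocks)
        self.snapshots = []

    def snapshot(self):
        # Preserve the current map; no data blocks are read, copied, or moved.
        self.snapshots.append(dict(self.active))

    def write(self, name, data):
        # The change is written to a "new block"; the old block stays intact
        # for any Snapshot copies that still point to it.
        self.active[name] = data


vol = WaflLikeVolume({"A": "a0", "B": "b0", "C": "c0"})
vol.snapshot()                    # Snapshot copy 1 points to a0, b0, c0
vol.write("B", "b1")              # changed B written to a new block, B1
vol.snapshot()                    # Snapshot copy 2 points to a0, b1, c0

print(vol.snapshots[0]["B"])      # b0: the old version is still readable
print(vol.active["B"])            # b1: the active file system sees the new block
```

Note that both snapshots and the active file system share the unchanged blocks A and C, which is why the only extra space consumed is for the changed block.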
NetApp Snapshot Technology (1 of 3)
Create Snapshot copy 1:
– No data movement
– Copy pointers only
[Slide diagram: blocks A, B, and C in a LUN or file map to blocks A, B, and C on the disk; Snapshot copy 1 points to the same three blocks.]
When Data ONTAP software writes changes to disk, the changed version of block B gets written to a new location, B1 in this example. That enables the file system to avoid all of the parity-update changes that would be required if the new data were written to the original location. If Data ONTAP software updated the same block, it would have to perform multiple parity reads to be able to update both parity drives. The WAFL file system writes the changed block to a new location, again writing complete stripes and not moving or changing the original data blocks.
When the file system creates the next Snapshot copy, the new Snapshot copy points only to the unchanged blocks A and C and to block B1, the new location for the changed contents of block B. That is all. Data ONTAP software does not move any data; it keeps building on the original active file system. It is extremely simple and efficient, and because it is so simple, it is good for disk utilization. The only extra blocks that are used when changes are made are those that are needed for the new or updated blocks.
NetApp Snapshot Technology (2 of 3)
Create Snapshot copy 1.
Continue writing data.
Create Snapshot copy 2:
– No data movement
– Copy pointers only
[Slide diagram: the changed block B is written to a new block, B1, on the disk. Snapshot copy 1 still points to A, B, and C; Snapshot copy 2 and the active LUN or file point to A, B1, and C.]
Snapshot copies have excellent performance characteristics. No extra I/O operations are required. Functionally, the system can realistically provide an unlimited number of Snapshot copies. The hard limit is 255 Snapshot copies per volume online, and in most production environments, that is more than are used. Two dozen active Snapshot copies at a time are the most that you find in production environments, even though many more can be utilized if needed.
Secondary archival environments certainly use many more Snapshot copies. Now, by using FlexClone technology, you can take a literally unlimited number of Snapshot copies of a volume. A user can take up to 254 Snapshot copies and then, on the last Snapshot copy, create a FlexClone volume clone. Then the user can take another 254 and clone again, take another 254, and so on. So today, we have unlimited Snapshot copies.
NetApp Snapshot Technology (3 of 3)
Create Snapshot copy 1.
Continue writing data.
Create Snapshot copy 2.
Continue writing data.
Create Snapshot copy 3.
Simplicity of model:
– Best disk utilization
– Fastest performance
[Slide diagram: after further writes, the changed block C is written to a new block, C2. Snapshot copy 1 points to A, B, and C; Snapshot copy 2 points to A, B1, and C; Snapshot copy 3 and the active LUN or file point to A, B1, and C2.]
This slide depicts Data ONTAP Snapshot performance, looking at I/O measured before, during, and after a Snapshot copy is created while the system is under a 50/50 4K read/write OLTP workload. You can see in this chart that a minor change in performance is experienced while the Snapshot copy is created, but as soon as the copy is complete, performance resumes the system's previous high I/O levels.
Data ONTAP Snapshot Performance
Snapshot copies:
– A point-in-time copy is created in a few seconds.
– No performance penalty occurs.
– TPC-C is published with five active Snapshot copies.
[Slide chart: I/O over time dips only briefly at the moment a Snapshot copy is created.]
When data is changed, the snapshot procedure begins to differ from how Data ONTAP software does snapshots. When data changes in a storage system from any of the competitors to NetApp, the file system must:
First read the original data block
Then write its contents to the copy-out region
Update pointers
Update the contents of the block on disk back at the block's original location
So, the new data is written to the original location. In addition, after the file system updates the original location, it must update the parity bits on any existing RAID drives.
To accomplish each update, file systems from the competitors to NetApp must do a:
Read of the old data
Write of the old data to its new location
Write of the new data to the old location
This is a total of one read and two writes to service one update request: three times the system overhead.
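The steps above can be sketched as a toy copy-out (copy-on-first-write) model that counts I/O operations per update. The class and block names are invented for illustration and do not represent any particular vendor's implementation.

```python
# Toy sketch of the copy-out scheme described above: the first update to a
# snapshotted block costs one read plus two writes; later updates to the
# same block overwrite in place with no extra work.

class CopyOutSnapshotVolume:
    def __init__(self, blocks):
        self.disk = dict(blocks)       # data stays at fixed locations
        self.copy_out = {}             # old versions moved here on first write
        self.snapped = set()
        self.reads = self.writes = 0

    def snapshot(self):
        self.snapped = set(self.disk)  # all current blocks now need protecting

    def write(self, name, data):
        if name in self.snapped:
            old = self.disk[name]      # 1 read: fetch the original block
            self.reads += 1
            self.copy_out[name] = old  # 1 write: save it to the copy-out region
            self.writes += 1
            self.snapped.discard(name) # only the FIRST write pays this penalty
        self.disk[name] = data         # 1 write: new data to the original location
        self.writes += 1


vol = CopyOutSnapshotVolume({"A": "a0", "B": "b0", "C": "c0"})
vol.snapshot()
vol.write("B", "b1")
print(vol.reads, vol.writes)           # 1 2: one read and two writes for one update
vol.write("B", "b2")
print(vol.reads, vol.writes)           # 1 3: the second update pays no copy-out cost
```

Compare this with the WAFL model, where every update is a single write to free space and the original block is simply left alone.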
A Competitor's Snapshot (2 of 2)
Create snapshot 1.
Continue writing data:
– Block changes.
– Read old block; write to copy-out region.
– Update snap pointer to copy-out region.
– Update block on disk.
One write requires:
– One read (old data)
– One write (old data)
– One write (new data)
[Slide diagram: the old contents of block B are copied to a copy-out region; snapshot 1's pointer is redirected there, and the new data B1 overwrites block B's original location on disk.]
Because the activity occurs on the first write, performance slowly ramps back on these systems. If the file system keeps updating block C, it does not have to do any extra work. Because it has stored the old version, it can now write over the original location without the need to first copy the data to a copy-out area. It is the first write on any block that is included in a snapshot that requires the extra overhead.
What typically happens on competitors' systems is a cyclical change in performance. For example, performance is at an expected level, then a snapshot is created, performance drops, and then performance slowly comes back to an acceptable level. When another snapshot is created, performance drops again and slowly returns, then drops again. So, although many NetApp competitors say that they can create thousands of snapshots, best practices generally show that administrators should limit the number of snapshots of a given set of data to anywhere from four to eight (it varies with each vendor) because of the potential performance impact and the difficulty of managing these copy-out areas.
When the snapshot feature on competitors' systems is used regularly, the systems start to accumulate multiple stored copies of data. The more snapshots that are created, the more likely the systems are to have multiple copies of data. Administrators of these systems have questions to consider, such as:
How big should this copy-out region be made? (The answer depends on the delta rate.)
What is the delta rate?
If the administrator does not make the copy-out region large enough, the snapshot capability breaks. The file system cannot keep the version of the old data and loses that snapshot. Of course, if the copy-out area is too big, it is wasted space. Determining what size these copy-out areas should be is an art and must be fine-tuned over time.
Snapshot Comparison
The NetApp approach:
– Minimum overhead, which guarantees disk-space efficiency
– No data movement, which guarantees disk performance and enables more Snapshot copies
Space on disk is better. Performance is better. The number of Snapshot copies is better.
[Slide chart: side-by-side comparison of used disk space after two snapshots. The NetApp system stores each block (A, B, C, B1, C2) only once; the other vendor's system also stores copy-out copies of the overwritten blocks.]
Assume that after a NetApp Snapshot copy is created, the storage system develops a logically bad block for some reason. (If the block is physically bad, RAID takes care of it, and it never comes into the Snapshot picture.) So, somehow, a bad block exists — C2, in this example — perhaps because it was accidentally deleted or overwritten.
Using Snapshot Copies to Restore Data
Block C2 is bad.
[Slide diagram: Snapshot copies 1, 2, and 3 retain pointers to the good blocks A, B, C, and B1 on disk, so a good version of the data is still available even though the active LUN or file points to the bad block C2.]
Data ONTAP software lets users self-restore from the .snapshot directory in NAS environments.
For example, if a user's home directory — drive H, for example — is hosted on a NetApp storage system, the user can see all available Snapshot copies by displaying the .snapshot directory in an NFS environment or the ~snapshot directory on drive H in a CIFS environment. The daily Snapshot copies occur at midnight every night. The hourly backups occur on a schedule that is determined by the administrator. On the back end, the system stores only changed blocks. Anything the user has not touched for a while is not duplicated for each Snapshot copy. Every Snapshot copy uses the same unchanged blocks.
If something happens to one of the user's files — perhaps it was deleted or written over by accident — the user can drag the data out of the Snapshot directory and restore it back to the user's home directory. When a user does that, the user is copying data from a Snapshot copy and creating new blocks in the active file system.
NOTE: The system administrator can turn this feature off.
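A self-restore of this kind is just a copy out of the read-only .snapshot tree. As a hedged illustration, the helper below shows the shape of that operation; the mount point, snapshot name, and file name in the usage comment are invented examples, not real paths.

```python
# Illustrative helper for a user self-restore in an NFS environment. The
# .snapshot directory layout follows the description above; all concrete
# names here are hypothetical.

import shutil
from pathlib import Path

def restore_from_snapshot(home: Path, snapshot_name: str, filename: str) -> Path:
    """Copy one file from the read-only .snapshot directory back into the
    active file system. The copy creates new blocks in the active file
    system; the Snapshot copy itself is untouched."""
    src = home / ".snapshot" / snapshot_name / filename
    dst = home / filename
    shutil.copy2(src, dst)                  # preserves timestamps as well
    return dst

# Hypothetical usage:
# restore_from_snapshot(Path("/mnt/home/user1"), "nightly.0", "report.doc")
```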
Using Snapshot Copies to Restore Data
Block C2 is bad.
Let users self-restore from the .snapshot directory in NAS (.snapshot in NFS, previous versions in Windows) environments.
[Slide diagram: the user copies the good version of the file from the .snapshot directory back into the active file system, creating new blocks; the Snapshot copies and the blocks on disk are unchanged.]
The process that is described on the previous slide is fine for everyday home directories with files such as Word documents, PowerPoint presentations, and so on. Of course, if you want to restore a database that is 50 GB, that is probably not what you have in mind with Snapshot copies. So, the other way to restore data from a Snapshot copy uses the SnapRestore feature. SnapRestore technology does not copy files; it simply moves pointers from the files that are found in the good Snapshot copy to the active file system. The pointers that are stored in the Snapshot copy are promoted to become active file system pointers.
The system tracks the links to blocks on the WAFL system, and when no more links to a block exist, the block is available for overwrite and considered free space.
Because SnapRestore technology is an all-pointer operation, it is quick. No data update occurs, nothing is moved, and the file system potentially frees blocks that were used only in the later version of the file system. SnapRestore operations generally happen in less than a second. They are not literally instantaneous, but they are practically instantaneous.
Imagine what restoring looks like on a competitor's system. The competitor's file system moves the blocks somewhere else, so to return to a previous version, all of the blocks must be copied back to where they were before. Some systems have ways to make that look live. For example, as a read request comes in for a particular block, the file system may serve that block while it moves the rest in the background. One way or another, the competitor must get all of those blocks back to their previous locations.
When restoring from a Snapshot copy, the SnapRestore command moves pointers from the good Snapshot copy to the file system. A single-file SnapRestore operation may require a few seconds or a few minutes to restore.
Using Snapshot Copies to Restore Data
Block C2 is bad.
Let users self-restore from the .snapshot directory in NAS environments.
Restore from the Snapshot copy with SnapRestore technology.
A single-file SnapRestore instance allows restoration of a single file from a Snapshot copy.
[Slide diagram: SnapRestore promotes the pointers from the good Snapshot copy (A, B1, C) to become the active file system, without copying any data blocks.]
The WAFL file system never holds data for longer than 10 seconds before it establishes a consistency point
(CP). CP operations are “atomic” operations; in other words, they must be committed fully or they arerecommitted. This is why they are called CPs.
At least every 10 seconds, the WAFL system takes the content of NVRAM and commits it to disk. When awrite request is committed to a block on disk, the WAFL system clears it from the journal. On a system that islightly loaded, an administrator can see the 10-second CPs happen: Every 10 seconds, the lights cascadeacross the system. Most systems run with a heavier load than that, and the CPs happen every second, everytwo seconds, or every four seconds, depending on the system load.
A question that frequently arises is: “Is NVRAM a performance bottleneck?” No, it is not. The response timeof RAM and NVRAM is measured in microseconds. Disk response times are always in milliseconds, and it
takes a few milliseconds for a disk to respond to an I/O. Because disks are radically slower than any othercomponent on the system, such as the CPU or RAM, disks are always the performance bottleneck of anystorage system . When a system is committing back-to- back CPs, that’s because the disks are taking writes as
fast as they can. That is a platform limit for that system. If that platform limit is reached, the option is tospread the traffic across more heads or upgrade the head to a system with greater capacity. That is a disk
limitation; the disks are emptying NVRAM as quickly as possible. NVRAM could function faster if the diskscould keep up.
NVRAM is logically divided into two halves so that, as one half is emptied, incoming requests fill the other half. The halves alternate back and forth. When the WAFL system fills one half of NVRAM, it forces a CP to happen and writes the contents of that half of NVRAM to the storage media. A fully loaded system does back-to-back CPs, filling and flushing both halves of NVRAM in turn.
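The double-buffered journal behavior described above can be sketched in a few lines. This is a minimal illustrative model, not NetApp code; the class name, capacities, and in-memory "disk" are all invented for the example. It shows the two mechanisms the text mentions: writes fill the active half, and a full half forces a CP that flushes it while new writes go to the other half.

```python
# Minimal sketch (not NetApp code) of a double-buffered journal:
# incoming writes fill one half while the other half is flushed
# to the "disk" at a consistency point (CP).

class DoubleBufferedJournal:
    def __init__(self, half_capacity):
        self.halves = [[], []]
        self.active = 0                  # half currently accepting writes
        self.half_capacity = half_capacity
        self.disk = []                   # stand-in for the storage media

    def write(self, request):
        self.halves[self.active].append(request)
        if len(self.halves[self.active]) >= self.half_capacity:
            self.consistency_point()     # a full half forces a CP

    def consistency_point(self):
        flushing = self.active
        self.active = 1 - self.active            # new writes go to the other half
        self.disk.extend(self.halves[flushing])  # commit the journaled writes
        self.halves[flushing].clear()            # that half of the journal is zeroed

j = DoubleBufferedJournal(half_capacity=2)
for req in ["w1", "w2", "w3"]:
    j.write(req)
# "w1" and "w2" have been committed by a CP; "w3" sits in the other half.
```

A real system would trigger the CP on a 10-second timer as well as on a full half; only the full-half trigger is modeled here.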
NVRAM Operation (3 of 4)
Client side:
– Activities that involve the operation consume main memory.
– Up to 10 seconds may elapse between CPs, during which many other operations arrive (not shown).
Storage system side:
– The organized data from the operations is written to disk in a process that is called consistency-point (CP) processing.
– NVRAM is zeroed.
[Slide diagram: NIC, main memory, and battery-backed NVRAM on each system]
One advantage that NetApp products gain from the use of NVRAM is the flexibility to use RAID more efficiently. RAID 4 is the NetApp base RAID type and has been used since the founding of the company. Because of the performance issues that result from its implementation, NetApp competitors do not use RAID 4. The competitors may be capable of handling it, but in most cases they don’t use it. Why?
For NetApp competitors, the parity drive is what is wrong with RAID 4. RAID 4 uses a single drive to hold parity. When a single drive is dedicated to parity and each request is written down as it comes in, the writes land in disparate locations. All of those updates happen randomly on a data disk, which means that each update also requires a parity change. This creates a parity drive that is far busier than each data drive. The parity drive gets hot (figuratively) and slows the entire system. The parity drive is a bottleneck.
So why does NetApp use RAID 4? NetApp can use RAID 4 because the WAFL system controls where to put the data on disk. It does the parity calculations in memory rather than having to read in extra data and parity bits. The WAFL system can lay out complete stripes on disk and writes to the parity drive no more and no less often than to all the other drives in the array.
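The full-stripe write idea rests on XOR parity: for a complete stripe, the parity block is the XOR of the data blocks, and it can be computed entirely in memory before anything touches the disks. The sketch below is a generic RAID 4-style parity calculation for illustration, not WAFL code; block contents are invented.

```python
# Sketch of full-stripe XOR parity: compute parity in memory for one
# complete stripe, so the parity drive sees exactly one write per stripe.

def stripe_parity(blocks):
    """XOR equal-length byte blocks of one stripe together."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

stripe = [b"\x0f\x0f", b"\xf0\xf0", b"\xff\x00"]   # three data blocks
parity = stripe_parity(stripe)

# Recovery: any single lost block is the XOR of parity with the survivors.
rebuilt = stripe_parity([parity, stripe[1], stripe[2]])
assert rebuilt == stripe[0]
```

The same XOR property is what makes a failed data disk reconstructible from the parity disk plus the surviving data disks.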
NVRAM Operation (4 of 4)
Client side:
– Activities that involve the operation consume main memory.
– Up to 10 seconds can elapse between CPs, during which many other operations arrive (not shown).
Storage system side:
– The organized data from the operations is written to disk in a process called a consistency point (CP).
– NVRAM is zeroed.
[Slide diagram: NICs, main memory, and battery-backed NVRAM on each system]
NVRAM AND HIGH AVAILABILITY CONFIGURATIONS (1 OF 2)
Here you can see a diagram of a basic cluster configuration. There are two controllers, with NVRAM being mirrored on each system. The two colors on each controller indicate that there is a mirror, orange, on both sides, and a mirror, blue, on both sides. The primary connection to one shelf is the secondary connection to the other controller’s shelf.
The cluster connection between them is InfiniBand on most of the platforms, with some exceptions such as the FAS2000 series. 10-Gb InfiniBand transports the heartbeat signal as well as NVRAM mirroring between the systems. Both systems handle their own traffic, data reads and writes.
Hardware color and the color of the wire indicate disk ownership, that is, which controller controls which disk.
(Technical detail: the cluster interconnect for the FAS270c is by way of dedicated GbE, internal to the FAS270c, inaccessible to all other "nodes" that might tend to rob performance from it or serve as a possible source of data corruption.)
NVRAM and High Availability Configurations (1 of 2)
[Slide diagram: clients and hosts connect to Controller A and Controller B; the controller interconnect carries the heartbeat and NVRAM mirroring between the two NVRAMs]
Both controllers can actively accept data, and when one fails, all of the traffic moves to the surviving controller. Most systems use software disk ownership, so an administrator can assign individual disks to one controller or another. This provides much more flexibility, but the important thing to remember is that you must assign a disk or it cannot be used for any purpose.
If you forget to assign a disk, it cannot be used by either controller. Even if a controller is in degraded mode and needs a spare to start a rebuild, it does not take an unassigned disk. With either software ownership or hardware ownership, when a failover occurs, all of the ownership moves to the surviving controller, and that controller takes all of the traffic.
Some customers choose to configure their controllers as though they are active-passive. They do not put any traffic on the second controller, so that when a failover occurs, the system has exactly the same performance profile as when it runs on the other controller.
Some customers choose to load them only to about 40%, so that when one fails over, the other is at about 80% utilization but still performs well. Other customers choose to load them normally and accept decreased performance during a failover. It depends on the goals of the clients and what they host on their systems. Any of these scenarios is a potential design option.
The total functioning NVRAM in a cluster is the same as the total functioning NVRAM in a single system. As mentioned earlier, NVRAM is never a bottleneck. NVRAM is lightning fast, and it is usually disk access that slows the system. As long as no primary traffic goes across the interconnect, a failover does not create performance issues.
NVRAM and Dual Controller Configurations (2 of 2)
[Slide diagram: clients and hosts connect to Controller A and Controller B; the controller interconnect carries the heartbeat and NVRAM mirroring between the two NVRAMs]
NETAPP RAID-DP TECHNOLOGY: THE NEW STANDARD FOR DATA RELIABILITY
RAID-DP technology is the new standard and benchmark for data reliability. With the introduction of higher drive capacities comes the increased probability of downtime for a much larger set of data, and customers face the need for better and more cost-effective data protection.
RAID-DP technology addresses these needs better than any other RAID method because of the way the data is striped across the drives.
NetApp RAID-DP Technology: The New Standard for Data Reliability
Solution: Greater Availability with RAID-DP®
– Same protection as RAID 1 (mirroring)
– Same cost, performance, and ease of use
Business Implications
– 71% more usable capacity than competitive offerings
– Drive failures won’t impact data availability
Technical Benefits
– More secure than RAID 5
– More reliable than mirroring for double-disk failure
– 14% parity overhead versus 50% overhead with mirroring (SATA)
Inside NetApp, the WAFL file system and NVRAM process is described as “cheating at Tetris.” The object of Tetris is to create full lines of blocks so that they get cleared out. That is what the WAFL system does. It is “cheating” because it involves caching blocks, looking at them, and laying them out before having to write them to disks.
The WAFL system can cheat when laying out blocks because of the journaling that occurs in NVRAM and the RAM buffer. It writes complete stripes across the array so that the traffic on the parity drive is the same as the traffic on the data drives. The disks all get the same number of writes across the entire RAID group. This is why NetApp can use RAID 4 and not have the performance problem of the parity drive getting hot and overloaded, in either RAID 4 or RAID-DP technology.
RAID 4 has always been available in Data ONTAP software. One of the advantages of RAID 4 is that it allows the administrator to add data drives to RAID groups. Adding a data disk that contains all zero bits has no impact on the parity disk. With RAID 4, this allows the addition of data drives to RAID groups without having to touch any of the data or recalculate any parity.
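The reason a zeroed disk can be added for free falls out of the XOR arithmetic: XOR-ing zeros into a parity value changes nothing. A two-line check (illustrative values, not real disk contents):

```python
# Why RAID 4 can grow a RAID group cheaply: XOR-ing in a data disk
# that is all zeros leaves every parity byte unchanged, so no parity
# needs to be recalculated when the new disk joins the group.

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

existing_parity = b"\x5a\xa5"    # illustrative parity block
new_zeroed_disk = b"\x00\x00"    # freshly added, zero-filled disk
assert xor_blocks(existing_parity, new_zeroed_disk) == existing_parity
```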
At least four disks should be added at a time to a system that has implemented RAID 4 protection. With aggregates, that is usually not an issue. Most NetApp customers add an entire RAID group at a time to an aggregate to increase capacity.
The next item is that most companies with enterprise environments want to be able to survive two simultaneous disk failures. So, how do you do that? That is where RAID-DP technology comes into the picture.
Data ONTAP Components: Data Layout with RAID 4
– Uses a Tetris-like write
– Tries to fill stripes
– Recalculates parity
[Slide diagram: a RAID stripe and write chain laid out across the data disks and the parity drive]
The DP in the RAID-DP name stands for “double parity” or “dual parity.” The OS materials refer to RAID-DP technology as dual parity, because RAID-DP technology has two parity disks. Engineering and technical documents may call it “diagonal parity,” because that more literally describes how it works. Instead of calculating the parity bit across horizontal stripes on the disks, RAID-DP technology calculates the parity diagonally down blocks, as depicted in this slide. The result is that RAID-DP technology can survive the failure of two disks simultaneously and maintain live read-write access to the data while the system reconstructs the contents of the failed disks.
Recently the Storage Networking Industry Association (SNIA) updated its definition of RAID 6, so NetApp can now call RAID-DP technology an implementation of RAID 6. SEs who are standards-oriented call the implementation RAID 6; SEs who are NetApp innovation-oriented call it RAID-DP technology.
HP and several other storage vendors can implement RAID 6. How many customer implementations use RAID 6 from vendors other than NetApp? The answer is few. The reason is performance impact. If 100% is normal performance on a storage system from a NetApp competitor, when a client turns on RAID 6 protection, the performance drops to about 60% or, in many cases, 40% of normal performance. You can imagine why: file systems that are constrained to doing updates to certain physical locations on a disk generate much additional read traffic and disk head movement when RAID 6 is turned on.
The WAFL file system also must do extra work with RAID-DP technology, calculating parity horizontally across all of the data drives to produce the normal parity updates and also calculating parity diagonally across all of the data drives. It seems at first glance that RAID-DP technology creates a massive cascade of I/O activity to update both kinds of parity with each write to disk, but because the majority of these calculations are done in RAM, with NetApp, the I/O impact is kept to a minimum.
Data ONTAP Components: RAID 4 Parity
RAID 4 protects against any single-disk failure.
[Slide diagram: data disks D1, D2, and D3 plus parity disk P, with a horizontal parity calculation across each stripe]
The WAFL file system not only tries to write complete stripes of data across disks, it always tries to write 16 complete stripes at a time. When the WAFL system writes 16 stripes simultaneously, it can do both the normal horizontal parity calculations and the diagonal parity calculations in RAM before committing any data to disk. The WAFL system wraps the diagonal calculations around this set of stripes and has all of the data laid out in memory, with the parity and the diagonal parity calculated, before putting the data on disk. This approach means that no extra read traffic or head movement slows storage I/O. The result is excellent performance even with RAID-DP technology enabled.
In terms of throughput and latency, the performance is the same for RAID-DP technology as it is for RAID 4. RAID-DP technology does introduce a 1% to 2% increase in storage controller CPU usage, because extra calculations are done in RAM before laying the data down on the disk, so the performance impact is minimal. The bottom line is that there is no performance reason for NetApp customers not to run RAID-DP technology.
The next point to clarify is how much storage overhead this creates, because the system dedicates another disk to parity. In other words, does a RAID-DP system require more disks than a comparable RAID 4 system does? RAID-DP technology requires the same number of disks as RAID 4 does. For RAID 4, there should be one parity disk for every seven data disks. By contrast, for RAID-DP technology, there should be two parity disks for every 14 data disks. The net result is exactly the same ratio of parity disks to data disks. However, the resulting protection that RAID-DP technology provides is much greater, because RAID-DP technology can survive two simultaneous disk failures.
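The overhead arithmetic from the paragraph above checks out directly: 1 parity disk per 7 data disks and 2 parity disks per 14 data disks are the same ratio, and both are far below the 50% overhead of mirroring mentioned later.

```python
# Parity-overhead arithmetic from the text: RAID 4 (1 parity per
# 7 data disks) and RAID-DP (2 parity per 14 data disks) have an
# identical parity-to-data ratio; mirroring costs 50% of raw capacity.

from fractions import Fraction

raid4_overhead = Fraction(1, 7)      # parity disks per data disk, RAID 4
raid_dp_overhead = Fraction(2, 14)   # parity disks per data disk, RAID-DP
mirroring_overhead = Fraction(8, 16) # 8 of 16 disks hold the mirror copy

assert raid4_overhead == raid_dp_overhead   # exactly the same ratio
assert raid_dp_overhead < mirroring_overhead
```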
Another important question that is commonly asked is, “If performance from other RAID 6 implementations is so bad, how do NetApp competitors get multidisk failure protection?” The answer is that the majority of the time, the competitors’ implementations create full mirrors. That means that eight disks are protected by eight disks, which effectively cuts usable disk space by 50%. This is a great selling point for NetApp; our usable space in a double-disk protection scenario is far greater than that of our competition. When discussing this issue with customers, be sure to focus on the double-disk failure protection.
Solidify RAID-DP technology as the foundation of data protection.
This table compares RAID-DP technology with RAID 5 and RAID 10.
While RAID 5 used to be considered adequate, it protects only against single-disk failures. With the sheer number of drives in use today, combined with drive manufacturer issues around the similar life spans of drives that are manufactured together, it is now a mathematical certainty that data centers must be prepared for double-disk failure scenarios. This requirement rules out RAID 5.
Many competitors respond with RAID 10, which overcomes some double-drive issues (unless both failures are on the same side of the mirror) and performs much better than RAID 5 does. But these improvements come at a high price, because of the need to double the raw capacity and thus double the price.
NetApp offers RAID-DP technology and backs it up by recommending RAID-DP technology as a best practice. RAID-DP technology protects against double-disk failure and has the high performance of RAID 10 and the low price of RAID 5.
No trade-off is required with RAID-DP technology.
Typical competitors are labeled at the bottom of the slide for comparison purposes.
Cost-Effective Data Reliability
The Problem
– Double-disk failure is a mathematical certainty.
– RAID 5 (single-parity disk) has insufficient protection.
– RAID 10 (mirrored copy) doubles the cost.
The NetApp RAID-DP Solution
– Protects against double-disk failure
– Provides high performance and fast rebuild
– Provides better protection than RAID 10 does and at a lower cost, without impacting performance

              RAID 5   RAID 6   RAID 10   RAID-DP
Cost          Low      Low      High      Low
Performance   Low      Low      High      High
Resiliency    Low      High     Med       High
NetApp RAID-DP technology offers the highest level of protection, with the best available performance, against data loss due to a double-disk failure resulting from media failure within the same RAID group.
Now consider a storage array. Disks are grouped in RAID sets. RAID helps to build resiliency against individual disk-drive failures. Upon a drive failure, the RAID set can reconstruct the lost drive by using the mathematical redundancy that is built into RAID. The reconstruction requires that all of the bits on the surviving RAID disks be read. Data loss occurs when a bit error is encountered during the reconstruction read operations.
You now have the three ingredients for a perfect storm under single-parity RAID:
– Increased (up to two times) drive failure rates mean more reconstructions with ATA drives.
– Lower bit-error resiliency on ATA drives means an increased likelihood of bit errors.
– Larger ATA disks mean a larger number of bits in a RAID group and thus an increased likelihood of bit errors.
NetApp effectively eliminates this risk with RAID-DP technology. Others can, too, with RAID 6. The difference is that the NetApp solution has minimal performance impact and is extremely simple to deploy.
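The "perfect storm" argument is easy to quantify with a back-of-the-envelope estimate: the chance of hitting at least one unrecoverable bit error while reading every surviving disk during a rebuild. The 10⁻¹⁴ bit error rate and 320-GB disk size below are illustrative assumptions (typical published SATA specs of this era), not figures taken from the course.

```python
# Back-of-the-envelope sketch of rebuild risk under single-parity RAID:
# probability of at least one unrecoverable bit error while reading all
# surviving disks during reconstruction. Error rate and disk size are
# illustrative assumptions, not course data.

def rebuild_error_probability(bit_error_rate, disk_bytes, surviving_disks):
    bits_read = disk_bytes * 8 * surviving_disks
    # P(at least one error) = 1 - P(no error on any bit read)
    return 1 - (1 - bit_error_rate) ** bits_read

p = rebuild_error_probability(bit_error_rate=1e-14,   # ~1 error per 10^14 bits
                              disk_bytes=320e9,       # 320-GB SATA drive
                              surviving_disks=7)      # 8-drive set, one failed
print(f"~{p:.1%} chance of a bit error during the rebuild")
```

With these assumptions the result lands in the mid-teens of percent, the same order as the up-to-17.9% figure cited on the following slide; dual parity makes a single bit error during rebuild survivable, which is why the double-parity likelihood collapses to near zero.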
Outstanding Customer Experience: NetApp RAID-DP Technology
Industry statistics: drive replacements and media errors increase with drive capacities (source: NetApp, Seagate, and Hitachi).
[Slide chart, FC versus ATA drives:]
– Typical disk drive replacement rate (per year): up to 5%
– Disk drive spec media or bit error likelihood (full-capacity transfer, 300-GB FC and 320-GB SATA): up to 2.6%
– Media or bit error with second failure likelihood, single parity (during reconstruction of an 8-drive RAID 4 or 5 set): up to 17.9%
– Media or bit error with second failure likelihood, double parity (during reconstruction of a 16-drive RAID-DP set): less than 1 in a billion (.0000000001%), protected with RAID-DP technology
In the current version of Data ONTAP software, aggregates default to RAID-DP technology. They can be changed to RAID 4 as an option, but in most cases there is no reason to do so. The majority of customers, both primary and secondary, and both online and near-line storage, use RAID-DP technology. The RAID group size is definable, but the default is the most efficient.
Aggregate Snapshot copies are required only in aggregates that use RAID SyncMirror software, including all MetroCluster configurations.
Other customer-relevant uses are:
– A feed into “WAFL_check -prev_CP”; this effectively restores the aggregate to that Snapshot copy (see below) and then runs against it
– The possibility of mirroring the entire aggregate
NOTE: This restores every FlexVol volume in the aggregate to the state that it was in when the aggregate Snapshot copy was created. It is unlikely that this is what you want.
Users can use SyncMirror software to mirror aggregates if needed. Aggregate Snapshot copies are enabled by default. A key point to consider when rolling back an aggregate Snapshot copy is that everything that is contained in that aggregate is reverted to that point in time. SyncMirror reverts all of the FlexVol volumes simultaneously.
Basic Aggregate Attributes
– The aggregate default RAID type is RAID-DP technology.
– The RAID group size is definable for one or more RAID groups.
– Aggregates support SyncMirror software.
– Aggregate Snapshot copy support (enabled by default) targets all flexible volumes that are contained within the aggregate.
A flexible volume is a collection of disk space that is allocated from the available space within an aggregate. FlexVol volumes are loosely tied to their aggregates and will be even more loosely tied in the future with the implementation of upcoming Data ONTAP functionality.
Note that, as the picture shows, both FlexVol volumes are striped across all of the disks of the aggregate. That is always true of a FlexVol volume, no matter what the size. A FlexVol volume can be as small as 20 MB or as large as the entire aggregate, up to 16 TB with 32-bit aggregates in the current version of Data ONTAP software.
Data ONTAP Storage Terminology: Flexible Volume
A flexible volume is a collection of disk space that is allocated as a subset of the available space within an aggregate. Flexible volumes are:
– Loosely tied to their aggregates
– The logical layer
[Slide diagram: an aggregate built from RAID Group 0, RAID Group 1, and RAID Group 2, with FlexVol1 and FlexVol2 striped across all of the RAID groups]
Quotas are important tools for managing the use of disk space on your storage system. A quota is a limit that is set to control or monitor the number of files, or the amount of disk space, that an individual or group can consume. Quotas allow you to manage and track the use of disk space by clients on your system.
A quota is used to:
– Limit the amount of disk space or the number of files that can be used
– Track the amount of disk space or the number of files used, without imposing a limit
– Warn users when their disk space or file usage is high
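The three behaviors listed above (a hard limit, pure tracking, and a soft warning threshold) can be sketched generically. This is not Data ONTAP quota syntax or code; the class, limits, and messages are invented purely to illustrate how the three modes differ.

```python
# Generic sketch (not Data ONTAP) of the three quota behaviors:
# hard-limit enforcement, usage tracking, and a soft warning threshold.

class Quota:
    def __init__(self, limit_bytes=None, warn_bytes=None):
        self.limit = limit_bytes   # hard limit (None = track only)
        self.warn = warn_bytes     # soft threshold (None = no warning)
        self.used = 0              # tracking happens even with no limits

    def charge(self, nbytes):
        if self.limit is not None and self.used + nbytes > self.limit:
            raise OSError("disk quota exceeded")   # hard limit refused
        self.used += nbytes                        # usage is tracked
        if self.warn is not None and self.used > self.warn:
            return "warning: usage is high"        # soft threshold hit
        return "ok"

q = Quota(limit_bytes=100, warn_bytes=80)
assert q.charge(50) == "ok"
assert q.charge(40) == "warning: usage is high"
```

A tracking-only quota is the same object with both thresholds left at None: every write is counted but never refused.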
Quotas
Quotas are specified to:
– Limit the amount of disk space that can be used
– Track disk space usage
– Warn of excessive usage
Quota targets:
– Users
– Groups
– Qtrees
This module is a quick, high-level review of NetApp core software technology. You have taken the Web-based courses that were listed as prerequisites for this class. One of those modules provided an overview of NetApp software technology, so this module is a review of that information and an introduction to other products.
This module emphasizes that many important core features of NetApp software are inside the Data ONTAP operating system. These features do not require a separate download, a separate install, a reboot, a separate blade, or a gateway. These features are inside the OS, ready to be used. Many capabilities require an additional license for customers to enable and use them, but the features are all contained within the OS.
Other pieces exist outside of Data ONTAP software. Some pieces reside on SAN hosts to help manage those hosts and to bring management simplicity to application and host administrators. These features free those administrators from relying on server administrators and storage administrators to accomplish basic storage tasks.
Administration tools are available for administering large environments. For example, Yahoo!, the largest NetApp customer, has roughly 1,200 systems online simultaneously. How do you manage 1,200 systems? That is an important, challenging question. Even a small shop may have five systems, and if the shop has only one IT person, it is a daunting task to manage all five systems. In response to those needs, NetApp has administration tools that are discussed in later modules of this course.
The topics in the center of this slide are core technologies that form the foundation of all NetApp products. This module briefly reviews these technologies.
Core Software Technology
– Data ONTAP 8.1 Cluster-Mode
– Data ONTAP 8.1 7-Mode for FAS systems and for V-Series systems
This module starts with Data ONTAP software, listed at the top of the previous slide, which is the NetApp OS. The primary function of the Data ONTAP operating system is to flow data between client computers and the disks or tape that are used for storage and archiving.
The Data ONTAP Operating System
Lesson 2
In 1992, NetApp introduced the Data ONTAP operating system and ushered in the network-attached storage (NAS) industry. Since then, NetApp has added features and solutions to its product portfolio to meet the needs of its customers. In 2004, NetApp acquired Spinnaker Networks to fold its scalable, clustered file-system technology into Data ONTAP software. That plan came to fruition in 2006, when NetApp released Data ONTAP GX software, the first clustered NetApp product. NetApp also continued to enhance and sell Data ONTAP 7G software.
Having two products provided a way to meet the needs of the NetApp customers who were happy with the classic Data ONTAP software, while allowing customers with certain application requirements to use Data ONTAP GX software to achieve even higher levels of performance (and with the flexibility and transparency that is afforded by its scale-out architecture).
Although the goal was always to merge the two products into one, the migration path for Data ONTAP 7G customers to get to clustered storage eventually required a big leap. Enter Data ONTAP 8.0 software. The goal for Data ONTAP 8.0 software was to create one code line that allows Data ONTAP 7G customers to operate a Data ONTAP 8.0 7-Mode system in the manner to which they’re accustomed, while also providing a first step in the eventual move to a clustered environment. Data ONTAP 8.0 Cluster-Mode allows Data ONTAP GX customers to upgrade and continue to operate their clusters as they’re accustomed.
A Tale of Two Products
[Slide diagram: two product lines converging — Data ONTAP leading to Data ONTAP 7G and then Data ONTAP 8.0 7-Mode; SpinFS leading to Data ONTAP GX and then Data ONTAP 8.0 Cluster-Mode]
FreeBSD is an advanced OS for x86-compatible (including Pentium and Athlon) and x86-64-compatible (including Opteron, Athlon 64, and EM64T) architectures, as well as the ARM, IA-64, PowerPC, PC-98, and UltraSPARC architectures. It is derived from BSD, the version of UNIX that was developed at the University of California, Berkeley. The D-blade is the “data blade,” which is a software component.
The Data ONTAP 8.1 7-Mode Operating System
– Is compatible with the Data ONTAP 7G operating system for volume-access paths and protocol stack
– Supports the Data ONTAP 7G software suite
– Supports NFS, CIFS, iSCSI, FC, and FCoE
[Slide diagram: 7-Mode volumes served through the 7-Mode stack (NFS, CIFS, iSCSI, FC) on top of FreeBSD, with the D-blade containing the WAFL virtualization layer and the RAID and storage interface]
The following trends are seen in the market today:
– A huge “data explosion” creates the need for scalability, capacity elasticity, and simple data management.
– Aging infrastructures create the need for business continuity, the need to protect against data loss, and the need for data retention and archiving.
– Silos of data create the need for unified storage.
– Changing business needs create the need for dynamic, customizable storage.
Perhaps the biggest challenge that IT decision makers face is getting a platform that can store and access all the current and future information, adjust to changing business needs (with integrated data protection), and do this without any disruption to clients. That means a highly scalable, shared enterprise infrastructure for the future.
There’s a Huge Shift in the Market
Dynamics of today’s data center:
– Explosive data growth
– Aging dedicated architectures
– Shared resources
– Changing needs
CIOs are being forced to re-evaluate what enterprise storage means.
[Slide diagram: apps, servers, network, and storage combining into a shared enterprise infrastructure]
Cluster-Mode Terminology
Virtual Server (Vserver)
– Similar to a MultiStore vfiler in Data ONTAP 7G
– Creates a namespace within a cluster
Logical Interface (LIF)
– A logical path between a physical port and a Vserver
Interface Group (ifgrp)
– A Virtual Interface (VIF) in Data ONTAP 7G
– Creates a logical trunking of physical ports
Scalability is one of the key foundations for the future of Data ONTAP. From the user side, it provides a single virtualized pool of all storage. From a system point of view, it provides performance and capacity scalability by adding controllers (performance) and storage (capacity). Storage is accessed through an abstraction, and the cluster delivers the right storage while keeping the complexity behind the scenes.
1. Scaling for performance is a given. It starts at the bottom with an appropriately designed block store (WAFL) and then moves up to supporting the latest, fastest media types (flash, flash as cache, SAS, and so on) and to dealing with multiple, faster cores. NetApp has a fully integrated technology agenda to drive more performance, which is all the more important with consolidation.
2. Consider the sheer amount of storage to deal with. Consolidated data centers have more terabytes, and it is no longer enough just to have large systems. A single logical pool is needed that can be provisioned across lots of arrays. This is the basis of the next-generation Data ONTAP 8.
3. How do you make sure that the storage can operationally scale? Storage administrators can no longer spend time on manual activities (provisioning, data protection, tuning, and so on). This is all about efficiencies and the ability to scale systems nondisruptively.
In the early days, the only way to upgrade was to scale up: get the bigger system, the better controller. With Ethernet networks and the emergence of the Internet, many environments scale out with more systems.
But with a flexible platform built with these new workloads in mind, you can now also scale for capacity, allowing applications to get the performance and quality of storage necessary to run.
Scalability in Three Dimensions
– Performance scaling
– Capacity scaling
– Operational scaling
Data ONTAP Cluster-Mode splits the standard functions by virtualizing the storage and client-access protocols, with front-end client-access protocols and back-end storage components. The back end is still the same capable WAFL file system with RAID-DP technology, Snapshot copies, and thin provisioning. The back end connects all the controllers in the cluster with a high-speed, reliable interconnect. All nodes can thus share data, communicate, and synchronize together.
Data ONTAP 8.1 Cluster-Mode
– Virtualization of storage and data access from the underlying controller and storage hardware
Virtual server (Vserver) architecture provides the NetApp core value propositions for Data ONTAP 8.1Cluster-Mode: single-system management, a single mountpoint and namespace for NAS, a scalable containerfor LUNs, and transparent data mobility, the ability to move volumes seamlessly around all the aggregates in
all the controllers, which provides nondisruptive, nonstop operations.
The last major architectural component is the Vserver. It allows the cluster to serve data and acts as acontainer for the logical client network interfaces, volumes, and LUNs. All client data is accessed through aVserver, so a minimum of one Vserver is required.
A Cluster-Mode system can support hundreds of Vservers.
A Vserver can support any NAS or SAN protocol.
It forms a namespace and LUN container for clients and hosts to access.
It is a container for volumes, which can include LUNs.
SAN access to LUNs in this scalable container uses multipath I/O (MPIO) and Asymmetric Logical Unit Access (ALUA) across all nodes in the cluster; for NAS, access is through the namespace.
A namespace consists of FlexVol volumes junctioned together at subdirectories below the root.
It forms a hierarchy that presents to clients as a single CIFS share or NFS export.
A cluster requires a minimum of one Vserver and can support hundreds, each with hundreds of LUNs and volumes.
NAS and SAN data can reside within the same Vserver, which provides a unified architecture at scale.
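The junction idea can be sketched as a toy model: volumes mounted at junction paths form one tree, and a client path resolves to whichever volume owns the longest matching junction prefix. Everything here (class names, volume names, the resolution logic) is invented for this illustration and is not Data ONTAP code.

```python
# Toy model of a NAS namespace built from junctioned volumes.
# Illustrative only; not taken from Data ONTAP internals.

class Volume:
    def __init__(self, name):
        self.name = name

class Namespace:
    """Maps junction paths to volumes; clients see a single tree."""
    def __init__(self, root_volume):
        self.junctions = {"/": root_volume}

    def mount(self, junction_path, volume):
        # Junction a volume into the tree at a subdirectory below the root.
        self.junctions[junction_path] = volume

    def resolve(self, client_path):
        # Longest-prefix match: find the volume that serves this path.
        best = "/"
        for path in self.junctions:
            if client_path.startswith(path) and len(path) > len(best):
                best = path
        return self.junctions[best]

ns = Namespace(Volume("vs1_root"))
ns.mount("/projects", Volume("proj_vol"))
ns.mount("/projects/genomics", Volume("gen_vol"))

print(ns.resolve("/projects/genomics/run42").name)  # gen_vol
print(ns.resolve("/projects/todo.txt").name)        # proj_vol
print(ns.resolve("/home").name)                     # vs1_root
```

The point of the sketch is that clients mount one export and never see volume boundaries; junctions do the routing underneath.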
Data ONTAP 8.1 Cluster-Mode: Virtual Server
A virtual server (Vserver) provides a logical, flexible, secure resource pool for a NAS namespace and LUNs.
All data access is through a Vserver, which supports one or more protocols.
A Vserver includes FlexVol volumes, LUNs, and logical network interfaces (LIFs).
A minimum of one Vserver is required; hundreds can be supported.
Integrated Shared Architecture
[Slide graphic: Vserver VS1 containing LIFs and FlexVol volumes on a high-availability pair, accessed by SAN hosts and NAS clients]
Vservers are the basis for multi-tenancy operations, too.
This graphic shows a second Vserver, VS2, and associated volumes, LUNs, and logical interfaces. Note that
one node hosts volumes from both Vservers, VS1 and VS2. This is fine and expected, but the logical
separation is maintained.
The new Vserver presents another namespace and set of LUNs, that is, another potential CIFS share or NFS mount for the same or different clients, with secure, delegated administration.
The same or different clients and hosts can mount it by using a logical interface from the new Vserver. Again, each Vserver exists with volumes and LUNs on one, some, or all of the aggregates and nodes.
You can define hundreds of Vservers in a single cluster. Vservers can use any combination of NAS and SAN protocols, which provides a true unified architecture at scale.
Data ONTAP 8.1 Cluster-Mode: Multi-Tenancy
Vservers enable multiple storage domains that share a common resource pool.
Vservers maintain logical separation: They define domains for volumes, LIFs, and access protocols.
Vservers provide secure, delegated administration.
Hundreds of Vservers can be supported.
[Slide graphic: Vservers VS1 and VS2 on shared high-availability pairs, each with its own LIFs and volumes, illustrating workload isolation for SAN hosts and NAS clients]
Nondisruptive operations (NDOs) are among the key benefits of Data ONTAP Cluster-Mode.
On-demand flexibility allows NetApp customers to seamlessly add capacity, rebalance resources, and rapidly grow the system.
Operational efficiency provides virtualized tiered services that allow NetApp customers to match business priorities.
"Always-on" operation provides serviceability and the ability to refresh technology without disruption to business systems.
Several components, when used in conjunction with each other, provide an "always-on" nondisruptive infrastructure.
The following sections of this module provide details about each of these components and discuss some of the common use cases and operations that each can provide:
Volume movement
LIF migration and load balancing
High availability with storage failover and LIF failover
Nondisruptive upgrades
Data ONTAP 8.1 Cluster-Mode: Nondisruptive Operations
Nondisruptive Operations
Volume Movement
LIF Migration and Load Balancing
High Availability: Storage Failover (SFO) and LIF Failover
Nondisruptive Upgrades (NDUs)
Consider how volume movement works. Physically, a volume is moved by a single administrator command (CLI or System Manager 2.0) from one aggregate to another. The data copy to the new volume is achieved by a series of Snapshot copy transfers, each time copying a diminishing delta from the previous Snapshot copy.
Only in the final copy is the volume locked for I/O while the final changed blocks are copied and the file handles are updated to point to the new volume. This should easily complete within the default NFS timeout (600 seconds) and almost always within the CIFS timeout period of 45 seconds. In some especially active environments, enough data will have changed that the final copy requires longer than the timeout period. Options are available for managing those rare occasions. Also, by using MPIO and ALUA, SAN paths are automatically updated to the optimized path after the volume moves to its new location. With this capability and the NAS namespace, the client's view of the namespace is unchanged after a volume moves, and SAN hosts continuously have access to the data.
Note that you can also move the LIFs to different ports. LIFs move automatically in the case of a node failure or, optionally, to dynamically rebalance the client connections. An administrator can also manually move LIFs to different controller nodes as part of planned maintenance events. Moving both volumes and LIFs is required to completely clear a controller before taking it down for maintenance or replacing it. This is covered in more detail later in the presentation.
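The iterative-copy idea behind volume movement can be sketched in a few lines: each background pass transfers only the delta that accumulated during the previous pass, so the final, locked copy is as small as possible. The function, block counts, and data structures below are invented for illustration; this is not ONTAP code.

```python
# Sketch of moving a volume by repeated snapshot deltas, locking I/O
# only for the final, smallest delta. Purely illustrative.

def move_volume(source_blocks, changes_per_round):
    """source_blocks: the volume's blocks at the start of the move.
    changes_per_round: blocks dirtied while each background transfer runs.
    Returns the destination block set and the number of background rounds.
    """
    dest = set()
    delta = set(source_blocks)
    rounds = 0
    # Background copies: each pass transfers a diminishing delta while
    # the volume stays online for reads and writes.
    for dirtied in changes_per_round:
        dest |= delta
        delta = set(dirtied)   # only what changed during that transfer
        rounds += 1
    # Final cutover: lock I/O briefly, copy the last small delta, then
    # repoint file handles to the new volume.
    dest |= delta
    return dest, rounds

blocks = {f"blk{i}" for i in range(1000)}
# Each round, fewer blocks are rewritten while the previous delta transfers.
dirty = [{"blk1", "blk2", "blk3"}, {"blk2"}]
dest, rounds = move_volume(blocks, dirty)
print(rounds, len(dest))  # 2 background rounds; all 1000 blocks arrive
```

The design point is that the lock window depends only on the size of the last delta, which is why the cutover normally fits inside client protocol timeouts.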
Cluster-Mode Transparent Volume Movement
Uninterrupted access: Continuous data access by clients and hosts
Uses Snapshot technology to copy data to a new aggregate in the background
Nondisruptively move volumes between any aggregates anywhere in the cluster
Storage space savings, mirror relationships, and Snapshot copies are unchanged
NFS, CIFS, iSCSI, FC, and FCoE
Cluster-Mode: Software Structure 2.0 (FAS22xx, FAS and V-Series 3210/3240/3270 and 6210/6240/6280)
Base: Included software delivering unmatched value. Includes: one protocol of choice*, base cluster key.
* = All protocols included at $0 on 22xx; FCP is unavailable on 22xx.
** = Snapshot, thin provisioning, RAID-DP, deduplication, cluster failover, and FlexCache are included and preinstalled with Data ONTAP 8.1.
Protocols: Additional protocols. Includes: iSCSI, FCP (not available for 2240), CIFS, NFS.
SnapRestore: Automated system recovery. Includes: SnapRestore.
SnapMirror: Enhanced disaster recovery and replication. Includes: SnapMirror.
SnapManager Suite: Automated application integration. Includes: SnapManager for Exchange, SQL Server, SharePoint, Oracle, SAP, Virtual Infrastructure (feature currently unavailable for use), Hyper-V, and SnapDrive for Windows and UNIX.
Data ONTAP Essentials has a couple of exceptions: the FAS2240 includes all protocols.
In the FAS2240 software structure, these features are not part of the Data ONTAP Essentials package but are included as part of the base Data ONTAP software.
7-Mode: Software Structure 2.0 (FAS2240, FAS and V-Series 3210/3240/3270 and 6210/6240/6280)
Data ONTAP Essentials: Included software delivering unmatched value. Includes: one protocol of choice, HTTP, deduplication, NearStore, DSM/MPIO, SyncMirror, MultiStore, FlexCache, MetroCluster, high availability.
Insight Balance: Performance and capacity management. Includes: Insight Balance.
SnapVault: Simplified disk-to-disk backup. Includes: SnapVault Primary and SnapVault Secondary.
SnapManager Suite: Automated application integration. Includes: SnapManager for Exchange, SQL Server, SharePoint, Oracle, SAP, Virtual Infrastructure, Hyper-V, and SnapDrive for Windows and UNIX.
Complete Bundle: All software for all-inclusive convenience. Includes: all protocols, Single Mailbox Recovery, SnapLock, SnapRestore, SnapMirror, FlexClone, SnapVault, and the SnapManager Suite.
COMPARISON OF DATA ONTAP 8.1 7-MODE AND CLUSTER-MODE
Although the Data ONTAP 8.0 operating system is a single code line, its two modes of operation have almost as many differences as Data ONTAP 7G software has from Data ONTAP GX software. Except for the most obvious difference of high availability, each mode has some features that are slightly different from the other's, and each mode has some capabilities that the other mode does not.
For example, 7-Mode has both synchronous and asynchronous SnapMirror functionality, while Cluster-Mode has only asynchronous SnapMirror functionality. Likewise, Cluster-Mode has data-protection and load-sharing mirrors, while 7-Mode has only data-protection mirrors. Data ONTAP 7-Mode supports the new 64-bit aggregate, while Cluster-Mode did not until the release of the Data ONTAP 8.0.1 operating system.
Another big difference is that 7-Mode supports the SAN protocols of FC and iSCSI, while Cluster-Mode supports only the NAS protocols. One of the key features of Cluster-Mode is the ability to move flexible volumes within the namespace transparently to clients. With the release of Data ONTAP 8.0.1 software, 7-Mode supports DataMotion for volumes in SAN environments.
Although differences exist at this time, eventually Data ONTAP 8.0 software will become a one-mode product with all the necessary features of the two current modes.
Comparison of Data ONTAP 8.1 7-Mode and Cluster-Mode

Data ONTAP 8.0 7-Mode | Data ONTAP 8.0 Cluster-Mode
Single-system namespace | Global namespace
32-bit and 64-bit aggregates | 64-bit aggregates (8.0.1 and greater)
SnapMirror Sync and SnapMirror Async | SnapMirror Async only
Data protection (DP) SnapMirror | Data protection (DP) and load-sharing (LS) SnapMirror
Controller failover (CFO) | Storage failover (SFO)
Deduplication | Deduplication (8.1 and greater)
NAS and SAN | NAS and SAN (8.1 and greater)
DataMotion for Volumes (8.0.1 and greater) | Volume move
MultiStore® software | Virtual servers
High availability carries with it the idea of many nodes that work together but that are seen externally as one
system.
The global namespaces (one for each cluster Vserver) are the external, client-facing representation of this distributed storage. Junctions are the glue that holds the global namespaces together. Junctions are analogous to symbolic links. They connect volumes to create the global namespace of a cluster Vserver.
For the nodes to work as one, constant intracluster communication must occur over a dedicated cluster network. That cluster network must be reliable.
Flexible volumes can be moved among aggregates and nodes. The movement does not cause the volume's path in the global namespace to change, nor is the process of moving a volume seen by the client. No NFS mountpoints or CIFS shares need to change, and the volume is available for reading and writing during the process. This is explained in more detail later in this course.
Data LIFs are not permanently tied to particular network ports and nodes. As such, they can be migrated(manually or automatically) away from problematic hardware or hardware that is heavily taxed.
Cluster-Mode Concepts
Clustered (distributed) NAS
Clustered (scalable) SAN
The ability to manage resources from any node in the cluster (cluster-wide UI)
Global namespaces
Hierarchical volume relationships (junctions)
Replicated database (RDB) semantics
Volume movement
For applications that need volumes that are larger than 16 TB, you must have an underlying aggregate that is
larger than 16 TB, too.
Why Larger Aggregates Are Needed
To enable larger volume sizes: Some applications require large volumes, for example, applications that are related to genomic research, seismic interpretation, satellite imagery, and PACS.
To reduce system-management overhead:
– Fewer drives per aggregate means many aggregates on large systems.
– Managing more aggregates adds low-value-added tasks to a storage administrator's workload.
DATA MIGRATION BETWEEN 32-BIT AND 64-BIT AGGREGATES (1 OF 2)
Professional Services
NetApp Professional Services has a service offering that can be used to migrate data and FlexVol volume Snapshot copies from a 32-bit aggregate to a 64-bit aggregate. The offering must be purchased, and it provides customers with Snapshot copy preservation.
Data Migration Between 32-Bit and 64-Bit
Aggregates (1 of 2)
Data ONTAP 8.1 7-Mode does not support conversion of a 32-bit aggregate to a 64-bit aggregate.
– If a 32-bit aggregate or volume must expand past the 16-TB limit, data must be migrated to new volumes that are provisioned in a 64-bit aggregate.
Qtree SnapMirror relationships and the NDMPcopy command migrate data that is present only in the active file system.
– FlexVol volume Snapshot copies are not migrated.
To migrate data with all FlexVol volume-level Snapshot copies preserved, contact NetApp Professional Services.
– NetApp Professional Services has a service offering that can be used to migrate data and FlexVol volume Snapshot copies from a 32-bit aggregate to a 64-bit aggregate.
DATA MIGRATION BETWEEN 32-BIT AND 64-BIT AGGREGATES (2 OF 2)
NDMP is an open protocol that is used to control data backup and recovery communications between primary
and secondary storage in a heterogeneous network environment.
NDMP specifies a common architecture for the backup of network file servers and enables the creation of a common agent that a centralized program can use to back up data on file servers that run on different platforms. By separating the data path from the control path, NDMP minimizes demands on network resources and enables localized backups and disaster recovery. With NDMP, heterogeneous network file servers can communicate directly with a network-attached tape device for backup or recovery operations. Without NDMP, administrators must remotely mount the NAS volumes on the server and back up or restore the files to directly attached tape backup and tape library devices.
NDMP addresses a problem that is caused by the nature of NAS devices. These devices are not connected to networks through a central server, so they must have their own OSs. Because NAS devices are dedicated file servers, they aren't intended to host applications such as backup software agents and clients. Consequently, administrators must mount every NAS volume by using either NFS or CIFS from a network server that does host a backup software agent. This cumbersome method causes an increase in network traffic and a resulting degradation of performance. NDMP uses a common data format that is written to and read from the drivers for the devices.
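The control-path/data-path split described above can be sketched as a toy: the backup application issues only control messages, while the backup data flows directly from the file server to the tape device and never passes through the backup server. All class and file names below are invented for illustration; this is not the NDMP wire protocol.

```python
# Toy sketch of NDMP's separation of control path and data path.
# Illustrative only; names are invented.

class TapeDevice:
    def __init__(self):
        self.written = []

class FileServer:
    def __init__(self, files):
        self.files = files

    def backup_to(self, tape):
        # Data path: the server streams its files straight to tape,
        # keeping backup traffic off the general network.
        tape.written.extend(self.files)

class BackupApplication:
    """Central program on the control path; it never touches the data."""
    def run_backup(self, server, tape):
        # Control message: "start a backup of this server to that tape."
        server.backup_to(tape)
        return len(tape.written)

tape = TapeDevice()
server = FileServer(["db.dump", "home.tar"])
app = BackupApplication()
count = app.run_backup(server, tape)
print(count)  # 2 files landed on tape without crossing the backup server
```

The contrast with the non-NDMP approach is that, without the split, every byte would have to be read over NFS or CIFS by the backup server before being written to tape.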
NDMP was originally developed by NetApp, but the list of data backup software and hardware vendors that
support the protocol has grown significantly. Currently, SNIA oversees the development of the protocol.
Data Migration Between 32-Bit and 64-Bit
Aggregates (2 of 2)
The following Data ONTAP 8.1 7-Mode tools can be used to migrate data:
Qtree SnapMirror relationships:
– Migrate data from a volume or qtree to a qtree on the destination.
– If qtree-to-qtree replication is performed, one qtree SnapMirror relationship per qtree is required.
The NDMPcopy command:
– In Data ONTAP 8.0 7-Mode, migrates data that is located in volumes, qtrees, and directories
– Can also migrate individual files
On-box, value-added software includes all of those features (some of them are separately licensed) that are installed with and that run within the Data ONTAP architecture. These features are not separate add-ons. They are always preinstalled on every FAS system that NetApp ships to customers.
Differences exist between the software structure on current systems and the software structure on the new FAS3200 and FAS6200 systems.
Key Points
With the new FAS3200 and FAS6200 systems, NetApp is rolling out a software structure that delivers more value and simplifies system configurations.
Currently, midrange and high-end systems have some software included in the base and have a menu for adding on more than 30 software products. In addition, the iSCSI protocol is included with the system, while other protocols, if needed, must be purchased separately.
The new systems have a simplified software structure. The three key features are:
More value, now standard with each system
Enhanced flexibility for customers to decide which protocol they want to include for free in their system purchase
Add-on software that is simplified to six key products or available together as the Complete bundle, with the option to buy any additional protocols
Key Software Enhancements
Operations Manager
Protection Manager
Provisioning Manager
SnapManager® for VI
SnapManager for Exchange
SnapManager for SharePoint®
SnapManager for SAP®
SnapManager for Oracle®
SnapManager for SQL Server®
More
Current Platforms (Customers Choose Structure):
Included Software: iSCSI protocol
Extended Value Software: Over 30 software products to choose from
New Platforms (Simplified Structure):
Included Software: System management, data protection, storage efficiency, and performance optimization
More Value (Now Standard): OnCommand management
Add On: More protocols, SnapRestore technology, SnapMirror products, FlexClone software, MultiStore software, MetroCluster, SnapDrive software
Many NetApp products are based on Snapshot copies. Because these product names are so similar, they can be confusing. This slide shows a SnapSuite Products Quick Reference Guide that can help you to keep the products straight.
If a Snapshot copy of an entire volume exists, and something goes wrong, the entire volume can be restored to its state when the Snapshot copy was created. Primarily, SnapRestore software is used to revert an entire file system back to a point in time when a particular Snapshot copy was created.
That is great protection, but it is all inside the same storage appliance. A customer may want to have its data replicated to another storage appliance and to another physical location. Two NetApp products, SnapMirror and SnapVault software, can do that. So, what is the difference?
The first difference is positioning. SnapVault software is an archival application. It performs a disk-to-disk backup and restore function and replaces tape in a given environment. So, as backup and restore technology, you can have production on system A and the destination of SnapVault software going to system B in a remote location. If something happens to the data on system A, the administrator can run a restore from system B to system A. That certainly provides protection for system A, but the process of restoring system A from system B can be time-consuming, because all of the original content must be copied over the wire from B back to A.
SnapMirror software, by contrast, is a disaster recovery solution. System B is maintained as a mirror image of system A. Unlike with SnapVault software, if something happens to system A in a SnapMirror environment, one of the available options is to bring system B online instantly as the new production server. When system A comes back, SnapMirror software enables you to resynchronize systems A and B and move production back to A. Within the license structure of SnapVault software, you cannot bring the destination platform online as the production server. And even if you turn a SnapVault destination into a production server, you can never resynchronize it with the original source platform without a complete return to baseline.
SnapVault software is for archiving, as its name suggests. SnapMirror software is for creating a mirror image for disaster recovery.
Data ONTAP On-Box Technology
Snapshot: Instant self-service file recovery for end users
SnapRestore: Instant recovery of volumes or large individual files
SnapMirror: SnapMirror Async and SnapMirror Sync remote replication over inexpensive IP; FC is now also supported
SnapVault: Heterogeneous, super-efficient hourly disk-based online archiving with versioning up to weeks or months
SyncMirror: Synchronous RAID-1 local mirroring by means of disk shelf "plexes." The RAID-1 remote mirroring product for disaster recovery is MetroCluster.
SnapLock: SEC-compliant disk-based WORM technology
SnapSuite Software Family: Quick Reference Guide
Starting with Data ONTAP 7.3, a new option called semi-sync is available, and the outstanding parameter functionality has been removed. When using semi-synchronous mode, writes are acknowledged as soon as the source system writes to its NVRAM. For more information, see the "SnapMirror Sync and SnapMirror Semi-Sync Overview and Design Consideration Guide" (TR-3326). An example configuration of SnapMirror Semi-Sync is: fas1:vol1 fas2:vol2 - semi-sync. Both synchronous and semi-synchronous modes of SnapMirror can only be used on volumes, not qtrees. All modes of SnapMirror can be used with both flexible and traditional volumes.
The vast majority of NetApp customers use asynchronous SnapMirror between two systems to update the mirror image as often as once per minute. SnapMirror in synchronous mode produces continuous, live updates between the two systems. Synchronous mode has very strict limits on bandwidth and on the distance between two systems; otherwise, latency will have too great an impact on application performance. Semi-sync mode is the middle ground between synchronous mode and asynchronous mode.
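The trade-off among the three modes can be sketched as a small comparison of when the client's write is acknowledged versus how much data is exposed if the source is lost. The latency figures are invented placeholders for illustration, not measurements of any real system.

```python
# Sketch contrasting write acknowledgment and recovery-point exposure
# for the three SnapMirror modes described above. Numbers are invented.

LOCAL_NVRAM_MS = 1      # time to land the write in source NVRAM
WAN_ROUND_TRIP_MS = 30  # time to reach the destination and hear back

def mode_tradeoff(mode):
    """Return (ack latency in ms, data exposed if the source is lost)."""
    if mode == "async":
        # Ack after the local NVRAM write; the mirror catches up on a
        # schedule, as often as once per minute.
        return LOCAL_NVRAM_MS, "up to one scheduled update interval"
    if mode == "semi-sync":
        # Ack as soon as the source writes its NVRAM; replication to
        # the destination continues in the background.
        return LOCAL_NVRAM_MS, "writes still in flight to the destination"
    if mode == "sync":
        # Ack only after the destination also has the write, so WAN
        # distance adds directly to application latency.
        return LOCAL_NVRAM_MS + WAN_ROUND_TRIP_MS, "none"
    raise ValueError(mode)

for m in ("async", "semi-sync", "sync"):
    latency, exposure = mode_tradeoff(m)
    print(f"{m}: ack after {latency} ms, exposure: {exposure}")
```

The sketch makes the "middle ground" concrete: semi-sync keeps the low acknowledgment latency of async while shrinking the exposure window to only in-flight writes.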
SyncMirror
SyncMirror was designed to handle two issues that are extremely important to data center managers: RTO, or Recovery Time Objective, and RPO, or Recovery Point Objective. Customers want to minimize both the time it takes to recover from a failure event and the amount of data loss.
For instant recovery, SyncMirror provides two mirrors (known internally as "plexes") on separate failure domains. If one mirror goes out, then you have the other mirror instantly available. The recovery time is essentially zero. This meets the customer objective of minimizing RTO.
And to meet a customer's Recovery Point Objective, SyncMirror provides synchronous data replication. By recovery point, we are referring to the point at which your mirrored data is out of phase with your primary production data. With SyncMirror, the mirrored data on both mirrors is always up to date, up to the second. So if one mirror goes down due to unexpected fire, power loss, or user error, the system can maintain continuous data availability by accessing the surviving mirror, which is fully synchronized with the latest data.
Another feature of SyncMirror is that it is integrated with our active-active clustered failover configuration, which can provide near-instantaneous failover both locally and, with MetroCluster, over a metropolitan area.
MetroCluster allows you to split a NetApp system across two locations for unified high availability (HA) and disaster recovery (DR) protection. You take an active-active configuration and split it across a distance as far as 100 kilometers. In the event of a disaster, terrorist attack, user mismanagement, or even a disgruntled employee who decides to destroy everything at one site, you still have instant access to a fully synchronized, up-to-the-second mirrored copy that can be as far as one hundred kilometers away.
SyncMirror is also tightly integrated with Data ONTAP for simplicity and ease of use. It is easy to administer and maintain over time, easy to install for new systems, and easy to upgrade for existing systems.
The FAS2040 system connects into a cluster by using onboard 1GbE ports. The first 8 ports of the Cisco Nexus 5010 and the first 16 ports of the Cisco Nexus 5020 can be either 1GbE or 10GbE, depending on the SFP that is used. NetApp has released a new 1GbE SFP to enable the FAS2040 system to participate in clusters. All other controllers remain at 10GbE. A best practice is not to mix 1-G and 10-G nodes.
Cluster Switch Requirements
Cluster interconnect switches:
– Cisco Nexus 5010 and Cisco Nexus 5020
– Wire-rate 10GbE connectivity between storage controllers:
1 x 10GbE connection from each node to each switch (two ports per node total)
Interswitch bandwidth: eight ports per switch
Cluster management switch:
– Cisco Catalyst 2960
– Management connections for storage controllers and shelves
Same switch configuration for all supported storage controllers
Four data-access protocols are most important to NetApp products:
CIFS, developed by Microsoft
NFS, developed by Sun Microsystems
iSCSI
FC
These protocols are referred to at NetApp as "the core four," "the core protocols," or just "core." At NetApp, the importance of these protocols is reflected in the fact that they have their own engineering group. In most systems, 99% of the data that comes on or off a NetApp system goes through one or a combination of these four protocols.
When you recommend a core protocol to a customer, it is important to know which ones are included with Data ONTAP software, which require a separate license, and which require an additional license fee. Each of the four core protocols requires a separate license key. However, NFS, CIFS, and FC require an additional fee for the license, while the iSCSI license is free.
Because every customer needs at least one of these protocols, the core protocol licenses are always included as separate line items as a part of each deal that is set up in the Quote tool, CustomerEdge, and PartnerEdge.
For customer convenience, the licenses are preloaded in each storage system at the factory prior to shipping. When the customer turns the system on, the core licenses that the customer ordered are active, and the system walks the customer through any necessary setup.
Data ONTAP Components: Data Access Protocols
The four core data access protocols are:
CIFS (Common Internet File System, developed by Microsoft)
NFS (Network File System, developed by Sun Microsystems: NFSv2, NFSv3, and NFSv4)
FC (Fibre Channel Protocol)
iSCSI (SCSI over TCP/IP)
All protocols, including iSCSI, are priced the same, with the first protocol provided for free no matter which one the customer chooses, except on the FAS2240.
Beyond the core four, additional protocols are supported by Data ONTAP software. For example, Data ONTAP software can use HTTP and HTTPS to get and put files, although it is not a full-fledged Web server. NetApp has no plan to become a replacement for Apache or IIS or any other full-featured Web server.
Data ONTAP also offers a full implementation of FTP and TFTP. Since Data ONTAP 7.0, the FTP server is native code that is compiled in C, as everything else is: a full-fledged, robust implementation of FTP.
Other supported protocols include:
NDMP for doing backups
SNMP for monitoring the system with any SNMP system
SMTP, because, while NetApp is not an e-mail server, it can send SMTP messages
Telnet, Remote Shell (RSH), and Secure Shell (SSH) for access to the system
SSH and HTTPS for security purposes
All of the NetApp management tools use secure Remote Procedure Calls to send API instructions back and forth.
Unified Connect allows all protocols to run over the UTA. Because everything can run over Ethernet, customers can fully consolidate their IT environments, and no need exists for separate cards or separate FC switches. NetApp is the only storage vendor to offer this and, as such, is a leader in helping customers to consolidate their environments.
True end-to-end network convergence
Increased efficiency and simplified management
Extension of the unified-architecture benefits
Streamlining of IT operations, which results in lower operating costs
True data-center consolidation
Ability to react to market demands faster
Unified Connect Infrastructure (2 of 2)
Key Feature Benefits
Business Value
[Slide graphic: with Unified Connect, a CNA carries FCoE, iSCSI, NFS, and CIFS through a 10GbE FCoE-enabled switch to a UTA presenting FC and 10GbE]
Next you'll hear about the off-box storage-management and administration tools that are available from NetApp. These products are all add-ons that are not automatically installed with Data ONTAP software. You have learned about two of them: SnapDrive and SnapManager software. What are the others?
NetApp has adopted an open strategy. BMC is a template to be leveraged with other partners.
Today NetApp has engaged with partners who fall into three broad categories:
IT service-management and orchestration platforms from vendors like BMC, CA, HP, IBM, and Fujitsu (Resource Orchestrator)
Management products that are provided by virtualization vendors
Home-grown management platforms or the emerging cloud-management platforms
These management platforms consolidate the management of multiple elements and give service providers the ability to manage and orchestrate their infrastructures from a single management console.
The NetApp differentiator is our partner strategy and integration. This is in contrast to our competitors, who provide access to third-party management platforms but also compete with their own management platforms.
In-House Management Tools
NetApp Open Management Interfaces: Flexibility to Choose the Right Solution (1 of 2)
The technical reason for the existence of SnapDrive software is related to the problem of host-side caching. In a SAN environment, the host system has a cache. Writes are committed to a host-side cache according to a schedule that is unknown to the storage system controller. If the storage system controller creates a Snapshot copy, it has no idea what is in that host-side cache. The cache may be partway through a root inode update. The result may be a bad Snapshot copy and anywhere from a few missing files and some corrupt files to a completely unreadable file system. So the technical reason for the existence of SnapDrive software is to coordinate Snapshot copies with the host OS. Essentially, SnapDrive software tells the host OS to:
Synchronize its disks or flush its cache
Create a Snapshot copy
Bring production back to normal
This coordination can happen quickly when it is integrated into the OS. It happens in a few clock cycles, but integration with the OS is important so that the storage system controller can guarantee that every write is committed at the time that a Snapshot copy is created.
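The coordination sequence can be sketched as a toy state machine: flush and freeze the host cache, take the Snapshot copy only while the cache is empty, then resume. The classes and method names here are invented for illustration; the real product drives the host OS and the storage system rather than Python objects.

```python
# Toy sketch of the flush -> snapshot -> resume coordination described
# above. Illustrative only; names are invented.

class HostFileSystem:
    def __init__(self):
        self.cache = ["partial inode update"]  # dirty host-side data
        self.disk = []
        self.frozen = False

    def flush_and_freeze(self):
        # Step 1: synchronize disks / flush the host-side cache,
        # then pause new writes.
        self.disk.extend(self.cache)
        self.cache.clear()
        self.frozen = True

    def resume(self):
        # Step 3: bring production back to normal.
        self.frozen = False

class StorageController:
    def snapshot(self, host_fs):
        # Step 2: create the Snapshot copy only while the host cache is
        # empty, so every acknowledged write is on disk and the copy is
        # consistent.
        assert host_fs.frozen and not host_fs.cache, "cache not flushed"
        return list(host_fs.disk)

fs = HostFileSystem()
controller = StorageController()

fs.flush_and_freeze()
snap = controller.snapshot(fs)
fs.resume()

print(len(snap), fs.frozen)  # consistent copy taken; host running again
```

Without step 1, the snapshot would miss whatever was sitting in the host cache, which is exactly the corruption scenario the paragraph above describes.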
Another important reason for the existence of SnapDrive software is to enable provisioning and management of backup and restore activities from the NetApp server. SnapDrive software provides OS-level integration that enables the server administrator to manage everything by using SnapDrive software: creating Snapshot copies, performing restores back to previous Snapshot copies, creating new drives, mounting new drives, putting file systems on new drives, and so on. The server administrator has control over these activities and does not depend on a storage team. SnapDrive software includes a complete set of tools that communicate over the Manage ONTAP API back to the storage system to control all of this management from the host server side.
External to Data ONTAP Software: Products (1 of 3)
SnapDrive Software:
Windows
UNIX
NetApp Confidential 85
Described collectively as the Application Suite within the overall NetApp Manageability Software Family, the following SnapManager releases are available at this time:
SnapManager for Exchange
SnapManager for Oracle and SnapManager for SAP:
UNIX
Windows
SnapManager for SharePoint
SnapManager for SQL Server
SnapManager for Virtual Infrastructure
SnapManager for Hyper-V
Later in the course, you’ll use some of these products in labs and course modules.
External to Data ONTAP: Software Products (2 of 3)
SnapManager software
Databases
Messaging
Virtualization
NetApp Confidential 86
Another off-box NetApp software product for storage management is called Open Systems SnapVault. It uses the same protocol as SnapVault but is for use when the source is administered by an open system such as Windows, Linux, or commercial UNIX, and NetApp storage is the destination. Open Systems SnapVault is very important for remote office environments: remote offices that are too small to have their own primary storage systems dedicated to their sites, but that have servers that need to be backed up. Remote office backup can be a problem for any IT environment. Many organizations have implemented tape at their remote offices, but changing the tape can become an administrative burden that is neglected or not performed regularly. Open Systems SnapVault offers a disk-to-disk backup solution that eliminates the need to change tapes.

Open Systems SnapVault was first developed by BakBone®, a company that licensed NetApp protocols to create Open Systems SnapVault. Now NetApp has created its own version. Some of the OEM versions (Syncsort, BakBone) have different features, but each gives server administrators the ability to back up disparate systems onto NetApp storage. Another useful feature of Open Systems SnapVault is that once the source is backed up to NetApp storage, it is a readable, mountable, viewable file system. It is read-only, and it is very easy to verify that the backup is good once it gets to the NetApp system.
NearStore Personality License is a license option that can be installed on any FAS3000 or FAS6000 system to optimize that system for data protection and retention applications. Adding the NearStore on FAS license enables more concurrent streams for SnapVault and SnapMirror, enables SnapVault for NetBackup™, and adds support for deduplication. NearStore on FAS systems utilize Data ONTAP for secondary storage environments and support all NetApp SnapX applications for data protection and retention near-line storage. NearStore on FAS is a general-purpose storage system that can be utilized in disk-to-disk backup, data archival, and data retention environments.
When you want to use a storage system for backup, you should optimize the storage system for backup by enabling the NearStore personality license. When enabled, the nearstore_option license provides the capabilities described above: more concurrent SnapVault and SnapMirror streams, SnapVault for NetBackup, and deduplication support.
SnapProtect
Symantec NetBackup with Replication Director
NetApp Syncsort Backup (NSB)
Open Systems SnapVault
NearStore Personality License
External to Data ONTAP: Software Products (3 of 3)
NetApp Confidential 87
Manage
NetApp provides the capabilities to help customers maximize the effectiveness of their IT infrastructures in meeting and adapting to changing service levels with minimal cost and effort (Efficiency). This is accomplished with tools that manage the NetApp infrastructure by delivering storage and service efficiency (Control, Automate, and Analyze). Additionally, NetApp management helps to analyze the entire multivendor infrastructure stack to assess and ensure optimal, efficient use.

The circular arrow indicates that the operations of control, automate, and analyze represent an ongoing process within IT management.
Control: “How do I manage my NetApp storage infrastructure more effectively?”

Control provides centralized management, monitoring, and reporting tools to optimize a customer’s NetApp storage and meet business policy requirements:
Proactive real-time problem alerting and detection
Comprehensive monitoring and reporting to assess the health of the storage infrastructure. Customers get a better view of what is deployed and how it is utilized, which enables them to improve storage-capacity utilization and increase the productivity and efficiency of their IT administrators.
Achievement of compliance and conformance with business policies by using enterprise-wide configuration management and distributed policy setting
OnCommand Management Software: Service Automation and Analytics
Automate: “How can I reduce the time and complexity of provisioning and protecting my NetApp infrastructure?”

Enabling service automation allows for the elimination of manual processes that lead to errors and costly downtime. By using policy-based automation, customers can standardize the utilization of their storage infrastructures. The service catalog lets customers define service levels that specify attributes of the storage infrastructure. This allows for automating the tasks of provisioning and protection and frees the administrator for more valuable projects.
Analyze: “I need detailed visibility into my infrastructure to gain service efficiencies and deliver on SLAs.”

Customers can gain a holistic view of their storage infrastructures as a unified set of services by using analysis, discovery, correlation, service paths, simulation, and root-cause analysis. Through the NetApp Analyze capabilities, customers get visibility into complex, multivendor, multiprotocol storage services.
Capacity management: Customers can continually improve storage efficiency and reduce capex and opex with efficient capacity management to identify, plan, forecast, and provide the right amount of storage on the right platform.
Virtual machine (VM) optimization: Customers can get service-path visibility into virtual infrastructure environments so that they can plan and optimize the alignment of VMs and storage and eliminate capacity and performance concerns.
Assurance monitoring: Customers can provide storage service monitoring and assurance visibility into networked storage assets to quickly understand their availability, performance, relationships, and utilization.
Akorri
With the recent acquisition of Akorri, the OnCommand family is strengthened with performance-capacity analytics that allow customers to plan capacity, predict issues before they happen, and troubleshoot issues if they do occur.
OnCommand Insight comprises four products, but this module focuses on Assure, Perform, and Plan.

OnCommand Insight Assure automatically discovers all resources and provides a complete end-to-end view of an entire service path. With OnCommand Insight Assure, customers can see exactly which resources are used and who is using them. Customers can establish policies based on best practices, which enables Insight Assure to monitor and alert on violations that fall outside those policies. Insight Assure is also a powerful tool for modeling and validating planned changes to minimize impact and downtime for consolidations and migrations. Insight Assure can be used to identify candidates for virtualization and tiering.
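The policy-and-violation idea can be illustrated with a toy check. The policy names, thresholds, and record fields below are invented for the example; they are not Insight Assure's actual policy model.

```python
# Toy illustration of policy-based monitoring in the spirit of Insight
# Assure. Policies, thresholds, and field names are invented examples.

policies = {"max_utilization_pct": 85, "min_free_tb": 5}

def check(resource):
    """Return a list of policy violations for one storage resource."""
    violations = []
    if resource["utilization_pct"] > policies["max_utilization_pct"]:
        violations.append(f'{resource["name"]}: utilization above threshold')
    if resource["free_tb"] < policies["min_free_tb"]:
        violations.append(f'{resource["name"]}: free capacity below threshold')
    return violations

# A pool that breaks both policies produces two alerts.
pool = {"name": "tier1-pool", "utilization_pct": 92, "free_tb": 3}
print(check(pool))
```

The point of the sketch is the workflow: policies are defined once, every discovered resource is evaluated against them, and anything outside policy surfaces as an alert.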
Insight Perform correlates resources to business applications, which enables customers to optimize resources and better align them with business requirements. Customers can reclaim orphaned storage and retier resources to get the most out of their current investments.

Insight Plan provides trending, forecasting, and reporting for capacity management. Insight Plan reports on usage by business unit, application, data center, and tenant. Insight Plan provides user accountability and cost awareness, which enables customers to generate automated chargeback reporting by business unit and application.
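As a rough illustration of chargeback reporting of this kind, usage records can be rolled up by business unit and application and priced at a rate. The rate, record fields, and figures below are invented for the example, not Insight Plan's actual data model.

```python
# Hedged sketch of business-unit/application chargeback rollup.
# RATE_PER_TB and all usage records are invented example values.
from collections import defaultdict

RATE_PER_TB = 100  # hypothetical monthly cost per TB

usage = [
    {"business_unit": "Finance", "application": "SQL", "tb": 12},
    {"business_unit": "Finance", "application": "Exchange", "tb": 8},
    {"business_unit": "Engineering", "application": "NFS home", "tb": 30},
]

def chargeback(records):
    """Sum cost per (business unit, application) pair."""
    totals = defaultdict(float)
    for r in records:
        totals[(r["business_unit"], r["application"])] += r["tb"] * RATE_PER_TB
    return dict(totals)

report = chargeback(usage)
print(report[("Finance", "SQL")])  # 1200.0
```

Grouping by (business unit, application) keys is what makes the report attributable: each line item maps storage consumption back to an accountable owner.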
OnCommand Insight:
High-Level Product Overview
[Slide summary: the four pillars of OnCommand Insight]
Assure (Availability): ensure configuration meets SLOs; identify the cause of service issues; plan and validate service changes; audit changes
Perform (Optimization): manage and optimize resource usage; get storage service performance metrics; align service tiers
Plan (Efficiency): manage and plan capacity; trend, forecast, and report; be cost-aware; enable chargeback and accountability
Balance (Predictability): map service health; optimize workloads; predict and resolve problems

NetApp Confidential 92
Customers need end-to-end visibility into complex virtualized environments. With OnCommand Insight, IT has a single pane of glass through which it monitors and manages its heterogeneous environment. That visibility also puts IT in a position to proactively manage the environment so that it can ensure that it meets SLAs on availability and performance. IT can also ensure that configurations are in line with service requirements. IT can implement best practices and view vulnerabilities and violations to drive availability and efficiency.

After IT has the environment under control, IT can analyze and optimize the existing resources with service analytics. All of the data that is captured is stored, and IT can then review and report on actual usage and better plan for capacity. This way, IT buys only what it needs. Service analytics also means that IT can report costs, and this reporting can be used for chargeback of storage services, as part of an overall chargeback strategy.
When you open the Insight Perform data warehouse, you see the data marts that are contained in the data warehouse for performance and capacity. Next in this course, you’ll dive into the Volume Daily Performance data mart and view some of the detailed performance reports that come from Insight Perform.

Insight Perform correlates the performance of an entire environment, from the application to the storage, to provide customers with performance metrics for their applications. If you drill down to the Volume Daily Performance data mart, you can see several types of reports that are ready to run. In the next few slides, you’ll view these.
Insight Plan and Perform
NetApp Confidential 93
Insight Plan introduces a hierarchical approach to business-level storage usage and reporting. Business units are replaced with more detailed tenant, line of business, business unit, and project trees that can be drilled into and filled in at any of the entities. Usage reporting can now be accomplished at the tenant level, which provides cloud and service providers the tools to report at any of the levels to their customers. Additionally, reporting can still be carved up at any of the levels, down to the application. Essentially, customers can report on tenants that have lines of business, business units, and projects with applications, so customers have the full spectrum of usage.

These business entities are added into OnCommand Insight Plan. Reporting is accomplished from the local Insight Plan server and rolled up to the data warehouse (DWH) for enterprise-level reporting. The next two slides show examples of local and DWH reporting.
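The tenant > line of business > business unit > project hierarchy described above can be modeled as a simple rollup. The entity names and sizes below are invented for illustration; the point is that one set of records supports reporting at any level of the tree.

```python
# Sketch of hierarchical usage rollup across business entities.
# Entity names and TB figures are invented example data.

entities = [
    # (tenant, line_of_business, business_unit, project, used_tb)
    ("AcmeCloud", "Retail", "Online", "Checkout", 4),
    ("AcmeCloud", "Retail", "Online", "Catalog", 6),
    ("AcmeCloud", "Banking", "Cards", "Fraud", 10),
]

def usage_at(level):
    """Roll usage up to any hierarchy level (1=tenant .. 4=project)."""
    totals = {}
    for row in entities:
        key = row[:level]  # truncate the path to the requested depth
        totals[key] = totals.get(key, 0) + row[4]
    return totals

print(usage_at(1))  # {('AcmeCloud',): 20}
print(usage_at(3))  # usage per business unit
```

Because each record carries its full path, the same data answers a service provider's tenant-level bill and an internal team's per-project report.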
Managing Business Entities
NetApp Confidential 94
Tenant (for cloud)
Line of business
Business unit
Project
Create business entities:
Our network topology is extensive, from SAN block-based connectivity with FC and iSCSI to NAS file-based attachment to LANs and dedicated Ethernet connectivity. We bring a unique offering to the marketplace because our systems are so flexible that one storage controller can handle communication from either SAN or NAS. This flexibility provides a big advantage to midsized companies that have an immediate need for NAS storage but want to move toward a SAN-style infrastructure.
Networked Storage Topology
[Diagram: NetApp FAS at the center. NAS (file) access over the corporate LAN and dedicated Ethernet serves enterprise and departmental NAS; SAN (block) access over iSCSI and Fibre Channel serves enterprise and departmental SAN.]
NetApp Confidential 6
Unified Storage Architecture is much more than support for multiple protocols on one storage array. In most environments of scale, multiple protocols are not run on the same box. The real benefits of unified storage are at an architecture level, not at a box level.

The big question is how to achieve the lowest cost profile while meeting the SLAs for a particular workload or mix of workloads. Consider the following questions:

Why buy more than is needed?
The ability to grow and scale from low-end to high-end systems on one architecture means that customers don't have to apply a "rip-and-replace" approach to one of the most costly parts of their IT operations: the processes and skill sets that are required to deliver IT services to their users.
How can we help customers who are invested in an infrastructure other than ours benefit from our IT efficiencies?
Our ability to virtualize SAN systems with V-Series enables customers to achieve the benefits of standardization, data protection, and storage efficiency even if they are currently running EMC, HDS, or HP storage systems.
How can customers achieve multiple cost-performance profiles within the same architecture?
We use flash-assist technologies, or caching techniques, to achieve high performance from low-cost drives. Thereby, we enable what some people refer to as "tierless storage." A unified architecture means that customers don't need to apply a "rip-and-replace" approach when additional I/O or, more likely, a mix of additional I/O and cost profiles is needed for multiple applications and storage needs.
Industry-Leading Systems Portfolio: Truly Unified
NetApp Confidential 7
Same tools and processes: learn once, run everywhere
Integrated data management
Integrated data protection
Protocols Broad System Portfolio
Flash Cache
SSD
FlexCache
Cost and Performance
Unified Management
FC
FCoE
iSCSI
NFS
CIFS
One Architecture for Many Workloads
As the number of architectures decreases, efficiency and flexibility increase. Customers can increase storage utilization by using one architecture, rather than using a multi-array approach that requires division of the architecture. The ability to handle multiple workloads and deploy multiple technology options across one architecture provides customers with the flexibility to deal with change. It is unlikely that the storage requirements of today and the storage requirements 12 to 18 months from now will be the same.

Delivery of a unified set of tools, a unified set of processes, and one way of performing disaster recovery, backup, provisioning, management, and maintenance produces massive benefits in terms of complexity reduction. Complexity reduction quickly translates into cost reduction.
What dominates our discussions with IT organizations today is the application silo model. Until a few years ago, the application-based silo was the primary provisioning model for servers and storage.

The application-based approach begins with an application and builds a dedicated infrastructure under it: servers and storage are carved out for the application and its users. Typically, silos are independent of each other. Often, different choices in regard to servers and storage are made for different silos. Each silo requires specialized skills, and often an organization is defined around a tier of service, with dedicated SAN teams or dedicated NAS teams, tier 1 or tier 2, and so on. When an application is rolled out, the first step is to purchase and rack new hardware and infrastructure. This process can require months, so months may pass before an application is placed into production. Then, when the roll-out is complete, it is difficult to share resources. Excess capacity and horsepower that is stranded in one silo can't be allocated to another application or repurposed to roll out a new application.

But server virtualization is changing this situation and paving the way for a completely different architecture, an architecture that enables one pool of resources to be shared across multiple clients. Server virtualization has a compelling value proposition and a profound implication. The value proposition is simple: most servers are underutilized. When multiple applications are run on one server, server footprint is reduced, utilization is increased, manpower needs are reduced, and money is saved. British Telecom, for example, reduced from 3,000 servers to just over 100 blades.

Virtualization allowed applications to be decoupled from hardware. Now, applications are mobile. They can move from server to server for load balancing, from data center to data center for disaster recovery, and into and out of the cloud for capacity bursting, flexibility, and cost. IT organizations can build a broad, homogeneous, horizontal server infrastructure that is capable of running multiple applications simultaneously. And server virtualization breaks the cycle of having to install new hardware in order to deploy new applications. Resources can flow to where they are needed. Applications can be moved around, and a degree of standardization can be achieved.

A flexible and efficient foundation is essential.
Evolving Data-Center Design
NetApp Confidential 8
Flexible and Efficient Shared IT Infrastructure
Traditional Approach
Application-Based Silos
Public Cloud
Zones of Virtualization
Private Cloud
Storage
Servers
Apps
Network
Management
However, some companies discover that their storage infrastructure doesn’t provide them with the level of flexibility and efficiency that they need. We work every day with companies that are spending most of their virtualization-resource efficiency gains on their storage infrastructure. So, a fundamentally different approach is required for storage.

We were early to recognize the need for virtualized storage. We have been delivering virtualized storage for years. Customers want to build not only a broad horizontal infrastructure that can run multiple applications for servers but also an infrastructure that maximizes storage efficiency.

The silo model is being replaced by the virtualization model. The model of running multiple applications on a server infrastructure that is optimized for flexibility, speed, and scale leads to a broader shared IT infrastructure. Various terms are used: virtual data center, dynamic data center, virtual dynamic data center, internal cloud, and private cloud. We use the term “shared IT infrastructure.”

We expect the silo and virtualized models to coexist for years, but eventually application-based silos will be relegated to legacy applications that will never be migrated or to a small set of key applications in the data center that warrant their own dedicated infrastructures. As time passes, the vast majority of storage and the vast majority of applications will move to the shared infrastructure.

NetApp is the clear leader in the new shared IT infrastructure world. Our underlying architecture and design approach, the partnerships that we have built in the market, and our commitment to customer success make us the storage foundation of choice for virtualized, shared infrastructure.
Most of the systems within our FAS portfolio were refreshed last year. In November 2011, the entry line was refreshed. We are now adding a new member to the family, the FAS2240 system. This system becomes the flagship, high-end offering of the entry line. This powerful system comes in 2U and 4U configurations.

We introduced new fixed configurations for the FAS2040 system with upgraded technology at a lower price. The FAS2040 system is now the entry-level offering for our Enterprise portfolio, replacing the FAS2020 system and beating its price point. The old FAS2040 SKUs and the FAS2020 SKU were placed on end-of-availability (EOA) on November 8, 2011.

All products in our line support Data ONTAP 8.0, providing a truly unified system portfolio. Regardless of where customers enter or purchase into our Enterprise line, they gain the increased efficiency and flexibility that is offered by Data ONTAP 8.0. With these enhancements to our portfolio, we offer not only a system that can compete with competitors such as VNXe but also a no-compromise portfolio that can beat VNXe and other competitors.
Key points:
A truly unified portfolio
The best storage platform for efficient IT infrastructure
An approach that differentiates our offerings from competitors’ offerings
Most efficient
Extremely flexible (in terms of performance, capacity, and expandability)
Delivering the best value to the customer
Refreshes and Additions
NetApp Confidential 9
More powerful, affordable, and flexible systems for midsize businesses and distributed enterprises
Unified Storage Architecture
FAS2040: 408 TB, 136 drives
FAS2240: 432 TB, 144 drives
FAS/V3210: 720 TB, 240 drives
FAS/V3240: 1,800 TB, 600 drives, 1-TB Flash Cache
FAS/V3270: 2,880 TB, 960 drives, 2-TB Flash Cache
FAS/V6210: 3,600 TB, 1,200 drives, 3-TB Flash Cache
FAS/V6240: 4,320 TB, 1,440 drives, 6-TB Flash Cache
FAS/V6280: 4,320 TB, 1,440 drives, 8-TB Flash Cache
V-SERIES OPEN STORAGE CONTROLLERS: V6200 AND V3200 SYSTEMS
There are two new V-Series systems: the V6200 series and the V3200 series.
Key points:
To complement the new FAS systems, we offer six new V-Series systems.
V-Series systems support disk arrays from major storage vendors.
V-Series systems build on the customer’s current storage investment to satisfy unmet needs.
V-Series systems enable customers to gain the benefits that NetApp can deliver.
V-Series Open Storage Controllers: V6200 and V3200 Systems
NetApp Confidential 10
Support for Disk Arrays from Major Storage Vendors
V-Series systems build on current storage investments to satisfy unmet needs.
V6280: 2,880 TB
V6240: 2,880 TB
V6210: 2,400 TB
V3270: 1,920 TB
V3240: 1,200 TB
V3210: 480 TB
FAS2000 controllers were added to the product line in 2009. They have fast CPUs and memory and were designed to operate within a high-availability architecture. The FAS2000 systems replaced the FAS200 systems, which were popular products for remote offices and small company installations.

The FAS2000 systems use a high-performance storage technology called SAS. Baseboard Management Controller (BMC) is a feature that is unique to the FAS2000 series and that enables remote management. The BMC feature is similar to the RLM port (control) that is available on the FAS3100 and FAS6000 systems. Both SATA and SAS disk drives are available internal to the box, and FC and SATA can be used externally through the expansion shelves. FAS2000 systems are RoHS-compliant.
FAS2000 Series: Architecture Highlights
The FAS2000 series is a NetApp entry-level enterprise platform.
– Fast CPU and memory architecture
– High-availability cluster in a box
– Either SATA or SAS storage architecture
– Increased onboard I/O connectivity
The series introduces BMC (Baseboard Management Controller) remote management technology.
SAS and SATA disks are available.
The series is RoHS-compliant (hazardous substances).
NetApp Confidential 13
Each FAS2000 system is an "all-in-one" system; that is, all components are inside the unit. In the FAS2000 series and the FAS200 series, a controller and a storage shelf are built into one unit. Here are the FAS2040 and the FAS2240, two of the latest additions to the FAS product line. In FAS2000 systems, SAS drives are used.

External DS14 shelves can be added, up to 84 additional FC or SATA spindles. At this time, there are no external SAS drives. SAS or SATA drives may be present in the controller head units of the FAS2040, the FAS2240, and the FAS2020 systems. The FAS2020 system is smaller than the FAS2040 and the FAS2240 systems. The FAS2020 system has 12 SAS or SATA drives and the ability to add two additional shelves.

FAS2000 systems can be clustered. The FAS250 systems, which the FAS2000 systems replaced, cannot be clustered. All FAS2000 systems are capable of the four core protocols and have FC connectivity. Because the FAS2040 and the FAS2240 systems have a PCI slot, they can be expanded. FAS200 systems cannot be expanded. Typically, the additional port is used for expansion shelves. The interconnect is across the backplane of the chassis. There is no separate CFO card.
Simplified management: OnCommand management software (System Manager to optimize day-to-day performance, provisioning capability to streamline storage provisioning, and protection capability to help secure business-critical data)
Increased availability: RAID-DP technology, Snapshot technology, DSM and MPIO, SyncMirror
– High availability: RAID-DP technology, NetApp Snapshot copies, device-specific module (DSM) and multipath I/O (MPIO), SyncMirror software, Open Systems SnapVault
– Secure multi-tenancy: MultiStore software
NetApp Confidential 17
The updated FAS2040 system includes all protocols and is offered at a price that is comparable to the price of the FAS2020 system.

The FAS2040 system is equipped with Data ONTAP Base, which includes the software listed in the top-left cell of the table. These items are provided at no additional cost to the customer. Therefore, even with our entry-level system, customers receive the industry-leading efficiency tools that NetApp is known for. Also, customers can use System Manager to experience greater control, better visibility, and increased simplicity in managing their environments. The FAS2040 system retains its current pack and bundle structure, so customers who want additional capabilities can choose one or more of eight software options.

The FAS2240 system also includes all protocols and all components of Data ONTAP Essentials. So, the FAS2240 system has the same software structure that our mid-level FAS3200 systems and high-end FAS6200 systems have. In addition to providing all of the features provided by Data ONTAP Base, Data ONTAP Essentials automates management via OnCommand management software and adds the secure multi-tenancy features that are provided by MultiStore software. To add software, customers just turn on a license. They can purchase enhanced capabilities one-by-one or
FAS/V3200 systems are the perfect building blocks for shared IT infrastructures. The three new systems are the FAS/V3210, FAS/V3240, and FAS/V3270.

FAS/V3200 systems offer the best value for mixed workloads. The systems were designed to cost-effectively deliver a strong combination of benefits and the flexibility that supports mixed workloads. FAS/V3200 systems also provide the scalability and flexibility that enable customers to be future-ready:

50% more PCIe slots (12 versus 8) provide for more connectivity options or more Flash Cache modules (up to 2 TB in the FAS/V3270 system).
Scalability of up to 2 PB of storage capacity handles requirement increases, especially for virtualized shared storage environments.

FAS/V3200 systems provide higher performance than FAS3100 systems provide (typically a ~25% gain for the FAS/V3270 system over the FAS3170 system and a ~50% gain for the FAS3240 system over the FAS3140 system). With the FAS/V3200 systems, an additional service processor and an alternate control path (ACP) enable additional diagnostics and nondisruptive recovery (the same as with FAS/V6200 systems).

FAS/V3200 systems also leverage the advantages of Data ONTAP 8 and the NetApp Unified Storage Architecture (one OS, consistent management software, multiple protocols, integrated data protection, and multiple tiers of storage) to provide industry-leading storage efficiency. For example, deduplication and compression help customers control data growth.
The FAS3200 and V3200 Series
The best value for mixed workloads
Future-ready flexibility and scalability
– 50% more PCIe connectivity
– Up to 2 PB of storage capacity
Unified architecture and Data ONTAP 8.0, which is the storage-efficiency leader
FAS3210, FAS3240, FAS3270: the perfect building block for shared IT infrastructure
NetApp Confidential 21
Target applications and customers for FAS3200 systems:

Business and virtualization applications
Storage consolidation and server virtualization
Windows storage consolidation
Enterprises and midsized businesses (primarily the FAS3210 system for midsized businesses)

Enterprise and midsized-business customers appreciate the value that the FAS3200 series delivers through its efficiency, flexibility (through expandability and scalability), and performance.

The FAS3270 system is great for enterprise midrange storage. It serves as a building block for shared IT infrastructure and facilitates storage consolidation. The FAS3240 system is the flagship product, with strong fundamentals in the price-sensitive enterprise space. It is particularly useful for mixed workloads and delivers scalability and performance at a great price. The FAS3210 system is particularly useful for mixed workloads in the medium-to-small-enterprise (MSE) market and for Windows storage consolidation.

Additional opportunities are available to MSE customers:

The V3210 system
Flash Cache, which automatically boosts performance

NOTE: Flash Cache is not offered in the FAS2000 family, and there is not a V2000 family for MSE customers.
FAS3270: for storage consolidation and server virtualization
FAS3240: for mixed workloads (the best price and performance)
FAS3170: two 64-bit dual-core 2.6 GHz
FAS3210: one 64-bit dual-core 2.3 GHz
FAS3240: one 64-bit quad-core 2.3 GHz
FAS3270: two 64-bit dual-core 3.0 GHz
Key points:

Three new FAS/V3200 systems: FAS/V3210, FAS/V3240, and FAS/V3270
Three primary differences between the three models: expandability, scalability, and performance

The expansion capabilities of the FAS/V3270 system and the FAS/V3240 system are equal (on-board connectivity and 12 PCIe slots) for host and back-end connectivity and for Flash Cache (for example, up to 2 TB of Flash Cache in the FAS3270 system). Both systems have more expandability than the FAS/V3210 system (four PCIe slots and on-board I/O).

There are two versions of the FAS/V3270 and FAS/V3240 systems, with and without expanded I/O. Most customers choose to purchase and deploy the expanded I/O systems (which are 6U tall, instead of 3U tall) because the additional height enables 12 PCIe slots, for additional connectivity and for Flash Cache modules. The FAS/V3270 system can scale up to almost 2 PB of storage capacity. The FAS/V3240 system can scale up to 1.2 PB. The FAS/V3210 system can scale up to 480 TB.

Among the FAS/V3200 systems, the FAS/V3270 system delivers the highest performance, and the FAS/V3240 system delivers more performance than the FAS/V3210 system. These differences are determined by the characteristics of the multicore processors and the amount of memory that are designed into each system.
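The scalability figures can be sanity-checked with simple arithmetic: maximum raw capacity is spindle count times per-drive size. This sketch assumes 2-TB drives, which matches the 960-spindle (1,920 TB) and 600-spindle (1,200 TB) maximums quoted for these systems.

```python
# Back-of-envelope check: max raw capacity = spindles x drive size.
# The 2-TB drive size is an assumption that fits the quoted maximums.

def max_capacity_tb(spindles, drive_tb=2):
    return spindles * drive_tb

print(max_capacity_tb(960))  # 1920 TB (~2 PB), the FAS/V3270 maximum
print(max_capacity_tb(600))  # 1200 TB (1.2 PB), the FAS/V3240 maximum
```

Note that these are raw-capacity ceilings; usable capacity after RAID and spares is lower.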
FAS/V3200 Key Specifications
* With I/O expansion module
** For Data ONTAP 8.0 and earlier, maximum capacity is half the amount that is specified.

FAS/V3170 | FAS/V3210 | FAS/V3240 | FAS/V3270
Number of processor cores: 8 | 4 | 8
Memory: 32 GB | 8 GB | 16 GB | 32 GB
NVRAM: 4 GB | 1 GB | 2 GB | 4 GB
I/O expansion module: -- | -- | Yes
Maximum number of PCIe slots: 8 | 4 | 12*
Onboard I/O: 4 x GbE, 8 x 4Gb FC | 4 x 6Gb SAS, 4 x GbE, 4 x 4Gb FC
Maximum number of spindles: 840 | 240 | 600 | 960
Maximum capacity**: 1680 TB | 480 TB | 1200 TB | 1920 TB
Maximum aggregate size: 70 TB | 50 TB | 50 TB | 70 TB
Data ONTAP: 7.2.5+ | 7.3.5 and 8.0.1
NetApp Confidential 33
8/18/2019 Student Guide - NetApp Accredited Storage Architecture Professional Workshop
Compared to the current FAS/V3100 systems, the new FAS/V3200 systems offer:
Greater flexibility, as a result of expandability and scalability improvements (the FAS/V3270 system offers 50% more PCIe slots and 15% more storage capacity than the FAS/V3170 system offers)
Improved performance (increase varies by systems)
Higher availability (from the new service processor and the alternate control path)
Assume that your customer wants to expand his FAS3240 system. He needs an additional dual-port optical
adapter. In this exercise, you identify the required part number and the available expansion ports.
1. On your laptop, log in to the NetApp Support site: https://now.netapp.com/eservice/SupportHome.jsp
2. When the Support site appears, look for Documentation. In the More Resources box on the right, click Interoperability, and then click System Configuration Guide.
3. On the left side of the screen, part way down, locate and select System Configuration Guide.
4. From the drop-down menu, select Release 8.0.1 7-Mode, and click Go.
5. On the left, locate and select NetApp storage systems.
6. From the FAS3000/6000 menu, select FAS3240 and Expansion slots/cards.
7. In the center of the screen, select Expansion Slot Assignments for a FAS3240A in an HA environment.
8. Locate the card part number and the relevant expansion slot numbers.
You can depend upon the accuracy of the data that the System Configuration Guide provides. The guide is updated constantly, and NetApp engineers are committed to ensuring that the data is accurate and timely. For each release of the Data ONTAP operating system, the data is encoded in a file. Systems expect to be properly configured (as prescribed by the guide) and recognize when they are not properly configured.
Mini-Exercise: I Want a New Card
Assume that your customer has a FAS3240 system. The customer is impressed with its capabilities, so much so that the customer wants to use its features for additional projects.
To build the desired configurations, the customer needs more ports, so the customer wants to buy a dual-port optical GbE adapter card.
The controller is running Data ONTAP 8.1 7-Mode.
Identify the part number of the card and the slot numbers in which the card can be placed.
Refer to the System Configuration Guide for the FAS3240, which can be accessed from the NetApp Support site.
NetApp Confidential 34
The three FAS6200 systems (FAS6210, FAS6240, and FAS6280) are designed for large-scale, shared IT
infrastructures.
FAS6200 systems provide the performance that the most demanding workloads require. FAS6200 systems deliver twice the performance that other FAS systems deliver. Performance will continue to increase as the Data ONTAP 8 operating system is enhanced and tuned.
FAS6200 systems provide the scalability and flexibility that is required to be future-ready:
Ability to scale to 3 PB of storage capacity to handle increasing requirements, especially for virtualized shared storage environments
Flexibility in regard to connectivity — more than twice the number of PCIe slots that FAS6000 systems provide
PCIe slots that can be used with Flash Cache modules to further increase performance (up to 8 TB in the FAS6280)
Built-in, high-bandwidth connectivity — 10-GbE, 8-Gb FC, and 6-Gb SAS — ready to meet any connectivity requirement that future deployments require
FAS6200 systems also enhance enterprise-class availability:
An additional service processor and an alternate control path (ACP) that enable additional diagnostics and nondisruptive recovery
FAS6200 and V6200 Series
High performance for demanding workloads
– Double the performance of other FAS systems
– Ongoing performance gains via the Data ONTAP 8 system
Future-ready scalability and flexibility
– Up to 3 PB of capacity and double the PCIe connectivity of other FAS systems
– Built-in 10 GbE, 8-Gb FC, and 6-Gb SAS
Enhanced enterprise-class availability: service processor and alternate control path (ACP)
FAS6280
FAS6210
FAS6240
NetApp Confidential 37
For orders placed after September 12, 2011, 1 TB of Flash Cache (512 GB per controller) is included with the FAS/V6240 and FAS/V6280 systems (but not with the FAS/V6210 system).
Together, NetApp intelligent caching (Flash Cache) and storage efficiency features (for example, deduplication) enable the virtual storage tier (VST), which optimizes performance and reduces costs. VST is highly effective for virtualization environments, databases, messaging, and numerous applications.
With VST, NetApp introduced a better approach — intelligent caching. This technology is optimized for Flash and is not simply an adaptation of older-generation disk-tier solutions.
The NetApp VST promotes hot data to performance storage without moving the data. The data block is copied to the VST, while the hot block remains on hard-disk media. With this approach, the operational disk I/O operations that other approaches require to move data between tiers are not needed. Also, when the activity of the hot data on Flash trends down and data blocks become cold, the inactive data blocks are overwritten with new hot-data blocks. Again, the data is not moved.
This no-movement approach is highly efficient. It not only eliminates wasteful operational I/O but also enables the application of advanced efficiency features such as data deduplication and thin provisioning.
Granularity is key to the ability to place the most efficient amount of data into the intelligent cache. NetApp VST uses a block size of 4K. This granularity prevents cold data from being promoted along with hot data. Contrast this approach with other companies' approaches, which promote data blocks that are measured in MB or even GB.
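The copy-not-move promotion described above can be sketched as a small read cache keyed by 4-KB block number. This is a hypothetical illustration only — the class and names are invented for teaching, not Data ONTAP internals:

```python
from collections import OrderedDict

BLOCK_SIZE = 4096  # VST promotes data at 4-KB granularity

class IntelligentCache:
    """Toy sketch of copy-not-move promotion: hot blocks are COPIED into
    cache memory, the authoritative copy always stays on disk, and a cold
    block is simply overwritten by a newly hot one -- data never moves
    between tiers."""

    def __init__(self, capacity_blocks, disk):
        self.capacity = capacity_blocks
        self.disk = disk            # dict: block number -> 4-KB payload
        self.cache = OrderedDict()  # block number -> payload, in LRU order

    def read(self, block_number):
        if block_number in self.cache:            # hit: serve from Flash
            self.cache.move_to_end(block_number)
            return self.cache[block_number]
        data = self.disk[block_number]            # miss: read from disk
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)        # coldest block overwritten
        self.cache[block_number] = data           # promote by copying
        return data
```

Because eviction is just an overwrite and the disk copy is authoritative, no operational I/O is spent writing cold data back to a lower tier.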
VST is simple to install and works out of the box with its default settings. The flexibility of VST enables the creation of multiple classes of service by enabling or disabling the placement of data into the VST on a per-volume basis.
NetApp designs enterprise-class high availability into its storage products.
The NetApp portfolio delivers proven data availability (across the whole storage infrastructure: system, disk shelves, and software). Across thousands of customer deployments, AutoSupport data shows better than 5x9 (99.999%) availability. The industry-analyst firm IDC validated this finding in a white paper (available on the Field Portal and on NetApp.com).
With the FAS6200 series (and with the FAS3200 series), enterprise-class high availability is further enhanced via these features:
Service processor, for lights-out management
Alternate control path to storage, for nondisruptive recovery
With the HA software that is provided with the Data ONTAP system and the MetroCluster software that customers can purchase, mission-critical applications are protected, and planned and unplanned downtime is eliminated.
NetApp Enterprise-Class HA
Better than 5x9 availability
– Demonstrated in real customer environments
– Validated in a white paper by industry-analyst firm IDC
New enterprise-class availability features
– Lights-out management via a new service processor
– Nondisruptive recovery through a storage alternate control path (ACP)
Continuous data availability for mission-critical applications (MetroCluster software eliminating planned and unplanned downtime)
NetApp Confidential 43
NVRAM8 is two cards in one: the interconnect hardware card for HA and the NVRAM electronics card. In
this regard, NVRAM8 is similar to NVRAM5 and NVRAM6. However, unlike with NVRAM5 and NVRAM6, NVRAM8’s HA and NVRAM functions are handled by separate chips.
NVRAM is a key element of NetApp technology. It enables writes to disk to be completed efficiently. It accomplishes this task by allowing writes to be delayed until they can be performed in one burst and by ensuring that the data is not lost to a power outage or system panic before the burst is committed to disk.
The HA function carries the process one step further by linking two controllers into a redundant pair. The HAlink enables one controller to perform high-speed updates of the other controller's NVRAM with data that is
not yet committed to disk. If one controller fails, the other controller completes the tasks that the failedcontroller did not complete.
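The delayed-burst behavior can be illustrated with a toy write journal (all names here are hypothetical; this is not Data ONTAP code). A write is acknowledged as soon as it is journaled, and the whole batch is then committed to disk in one burst:

```python
class WriteJournal:
    """Toy NVRAM-style journal: an incoming write is acknowledged once it
    is logged; the log is flushed to stable storage in a single burst,
    and after a failure the surviving log can simply be replayed."""

    def __init__(self, disk, flush_threshold):
        self.disk = disk                  # dict standing in for stable storage
        self.flush_threshold = flush_threshold
        self.log = []                     # journaled but not yet on disk

    def write(self, key, value):
        self.log.append((key, value))     # journal first; ack the client now
        if len(self.log) >= self.flush_threshold:
            self.flush()                  # commit the batch in one burst

    def flush(self):
        for key, value in self.log:
            self.disk[key] = value
        self.log.clear()

    def replay(self):
        """Roughly what a partner controller does after a takeover:
        replay the surviving journal entries to disk."""
        self.flush()
```

In an HA pair, each controller would also mirror its partner's journal, which is the role the HA side of the NVRAM card plays.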
No longer is battery power used to hold the contents of DRAM for a minimum of three days. Instead, when system power is lost unexpectedly, NVRAM8 performs a de-stage operation. The contents of DRAM are moved to flash components within a minute of the power loss, and then the card shuts down completely. The battery is not needed to preserve customer data. When system power is restored, the Data ONTAP system transfers the contents of the flash components back to DRAM and replays the NVRAM log from DRAM memory.
Like NVRAM5 and NVRAM6, NVRAM8 uses InfiniBand as the protocol for the interconnection between the redundant pairs of controllers for HA solutions. With the advent of NVRAM8, the speed of the link doubled from SDR (2.5 Gb per second per lane, or 10 Gb per second per link) for NVRAM5 and NVRAM6 to DDR (5 Gb per second per lane, or 20 Gb per second per link) for NVRAM8. Like NVRAM7 in Spectre (FAS3100 series), a chassis with two controllers does not need external cables to make the HA connection.
NVRAM8 features an additional high-speed connector to the controller board. This connector is part of a physical link over the midplane to the other controller. A special LED on the PCI bracket lights up when two controllers with NVRAM8 are present in a chassis.
NVRAM8 Architecture
The NVRAM8 is non-standard in height.
The dedicated controller slot 2 is based on power and cooling requirements.
FRUs can be installed and removed without tools.
NetApp Confidential 47
For both the software and hardware modules, the NVRAM card is referenced. The NVRAM6 card is
currently used on NetApp systems.
The battery uses lithium-ion technology. Three or five 1.95-Ah cells provide a total of 5.9 or 9.8 Ah at 4.1 V. If an external power failure occurs, this configuration can supply onboard power for at least three days.
Each card contains two independent chargers. Together, the chargers charge the battery in less than 10 hours.
When the system is powered on, each charger is ON by default. Safety circuitry is built into the battery pack.
Each card has two InfiniBand CFO connections. A card has one or two batteries. If a card has 512 MB of memory, it has one three-cell battery. If a card has 2 GB of memory, it has a second battery. With this configuration, NetApp guarantees at least 72 hours of battery life.
Typically, a battery lasts longer than 72 hours, but NetApp guarantees at least 72 hours. Some people say that 72 hours (3 days) is not very long. However, 72 hours can be sufficient to enable the processes that prevent data loss.
In most cases, within standard storage environments, backup power is available. When power is restored to a system and the system is rebooted, the Data ONTAP operating system flushes the dirty writes in NVRAM and commits them to disk. At that point, a clean shutdown can be executed via the "halt" command. Then NVRAM contains no data, the system shuts down completely, and all data is committed to disk. If all data can be removed from NVRAM within the three days that the battery provides power, no data is lost.
NVRAM6
512-MB/2-GB DIMM
3-Cell Battery
2-Cell Battery (2-GB Version Only)
IB CFO Connectors
NetApp Confidential 48
There are three E5400 models, each of which has a unique form factor. The E5460 is a 4U, 60-drive system; the E5424 is a 2U, 24-drive system; and the E5412 is a 2U, 12-drive system.
Each system has dual controllers and supports a range of SAS drive types, as well as the ability to intermix the different drive technologies.
With these three unique models, the E5400 provides a variety of starting points to best meet solution and/or
customer requirements.
The E5460 is a great fit for big data solutions in that it delivers the highest combination of performance and capacity. The E5460 delivers up to 6 GB per second of sustained bandwidth and supports up to 180 TB of raw capacity. Additionally, the E5460 supports the widest range of drive technologies, from high-performance SSDs to high-capacity near-line SAS drives, making it a great fit for any environment.
For performance density, the E5424 delivers the highest bandwidth per rack unit. With up to 4 GB per second on reads and 2.5 GB per second on writes, nothing packs more throughput into so little space. Its 2.5-inch drives deliver great performance per watt. And the E5424 meets the NEBS Level 3 and ETSI Telco specifications.
The E5412 is a great choice for smaller configurations. And, like the E5424, it meets the NEBS Level 3 and ETSI Telco specifications.
In many cases, these three models deliver the performance, density, and capacity required for building big data solutions. But when the situation calls for more capacity or performance, each system supports expansion through any of its three disk shelf options: the DE6600, DE5600, or DE1600.
Let’s take a look at these now.
E2600
E-Series Controller Models
Dual active controllers
Support intermixed SAS, SSD drive types
Support disk shelves for expansion with 12, 24, or 60 drives
E5400
NetApp Confidential 54
Before we move forward, let’s take a look at the high-level positioning for the two NetApp platforms.
ONTAP will continue to focus on the high-feature-content markets, such as Enterprise IT and Cloud infrastructures, where robust data management features are required.
The E-Series platform will be used to enter the emerging big bandwidth and big data markets, where the focus is on pure performance and data protection. For these markets we will create solutions based on the E-Series platform. It’s important to note that E-Series storage is only available directly from NetApp as part of a Big Data solution.
Big data means different things to different people, so before we move on let’s put some framework around what we mean by big data. We actually see big data as three fairly unique opportunities. The first is analytics, which ranges from structured enterprise-class data warehousing solutions, such as Teradata, to a new generation of appliance-like devices coupled with open-source software to build scalable, cost-effective compute farms for data analysis.
The second big data opportunity is bandwidth. These environments, such as high-performance computing, rich media, video, and so forth, are generating enormous amounts of data and put unnatural stresses and strains on traditional storage systems.
The third market is around content, which is the age-old problem of having the rate of unstructured data growth greatly exceed the rate of scale in conventional systems.
So we see the whole ecosystem of big data in these three dimensions, which you’ll see referred to as ABC for analytics, bandwidth, and content. And for these markets we’ll use the E-Series platform to create solutions tailored for new verticals, which we’ll look at now.
A New Platform for a New Market
NetApp Confidential 55
NetApp’s unified architecture and Data ONTAP operating system will continue to target Enterprise IT and Cloud Infrastructure markets
– Robust data management requirements
NetApp will use the E-Series platform to enter the emerging Big Bandwidth & Big Data markets
– Focused on pure performance and data protection
– Available only as part of an E-Series solution
We’ve identified six initial Big Data solutions for the E-Series platform. The first, which was announced back in May, is a full-motion video solution. The FMV solution combines Quantum StorNext software and E5400 storage to create a single architecture for ingest, exploitation, and dissemination. The FMV solution can deliver over 20 gigabytes per second of read and write throughput and over a petabyte of raw storage in a single rack.
The other solutions, which will roll out over the coming months, include three more bandwidth solutions: Media Content Management and two HPC solutions — seismic processing and Lustre. The first analytics solution released will be for Hadoop. And the initial content solution is StorageGRID.
These six solutions are the only way to purchase E-Series storage directly from NetApp. And for each of these solutions, a custom-configured E-Series storage system is tested and integrated with third-party software to create a turnkey solution designed to meet the specific requirements of that vertical. Additional training courses, presentations, and collateral are available for each of these solutions.
NOTE: This course covers the full feature set and capabilities of the E-Series platform. Solutions built on the E-Series are architected to include the specific product attributes that best meet the workload, capacity, and form-factor requirements for that vertical. As a result, some of the features and capabilities discussed in this course are not offered or relevant for a given E-Series solution. Please refer to solution documentation and collateral for an understanding of the E-Series attributes offered as part of the solution.
E-Series Solutions
Media Content Management: multi-petabyte capture and playback platform for rich media content creators
HPC: Seismic Processing: high-bandwidth, high-density platform that stores large volumes of 2D, 3D, and 4D seismic data with scalable growth
HPC: Lustre: massively parallel distributed file system for large-scale cluster computing
Primary SATA storage was introduced in May 2005. For the past couple of years, NetApp has used SATA storage on FAS systems. SATA storage is intended for primary applications. SATA storage enables NetApp to provide customized solutions.
The target markets for SATA are latency-insensitive primary applications. Latency considerations are very important. ATA drives are inexpensive and widely available, but they are slow. To maintain less than 20-ms latency, an ATA drive can provide approximately 40 IOPS. However, to maintain the same level of latency, a 15,000-RPM FC drive can provide approximately 200 IOPS.
You must carefully consider latency. You must ensure that, on installation, SATA drives are placed where latency is not relevant. For example, you might use SATA drives in home-directory environments and read-only data warehouses.
Where latency is critical, do not use SATA drives. Therefore, in most cases, you should not use SATA drives in production environments.
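The per-drive IOPS figures above translate directly into spindle counts. A back-of-the-envelope sizing sketch, using the approximate numbers quoted above (the 4,000-IOPS workload is an invented example):

```python
import math

# Approximate IOPS each drive can sustain while keeping latency under
# 20 ms, per the figures quoted above
IOPS_PER_DRIVE = {"SATA": 40, "FC_15K": 200}

def drives_needed(workload_iops, drive_type):
    """Minimum spindle count to sustain a workload at < 20-ms latency."""
    return math.ceil(workload_iops / IOPS_PER_DRIVE[drive_type])

# A hypothetical 4,000-IOPS workload:
print(drives_needed(4000, "SATA"))    # 100 SATA spindles
print(drives_needed(4000, "FC_15K"))  # 20 FC spindles
```

The five-fold difference in spindle count is why SATA's lower cost per drive does not automatically mean a cheaper solution for latency-sensitive workloads.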
SATA
SATA characteristics are
– A serial enhancement of parallel ATA
– Faster transfer speeds (more than 150 MBps)
– Thin cable connections (7-pin)
Primary SATA storage is
– A storage hardware option for controllers
– Intended for primary applications
– Intended to match application storage requirements with solution costs
NetApp Confidential 60
SATA drives should be used only if primary applications do not require peak storage performance. To
determine whether the use of SATA drives is appropriate, analysis is mandatory!
If SATA storage is appropriate, recommend it. SATA storage provides a cost opportunity, because SATA
storage is cheaper than FC storage.
NOTE: If SATA drives are placed into a production environment that is beyond their capabilities, the drives must be replaced, and the customer loses confidence in NetApp and in the people who recommended the use of SATA drives. Before you recommend the use of SATA drives, analyze the sizing and performance requirements of the situation.
SATA Storage: Target Markets
Latency-insensitive primary applications
– Home directories
– Data warehouses
Instances in which primary applications do not require peak storage performance
– To determine suitability, analysis is required.
– For situations for which SATA storage is appropriate:
Target highly competitive deals
Deny opportunities to competitors
Craft finely tuned solutions
NetApp Confidential 61
SAS drives and FC drives have identical performance profiles, but management and reliability considerations
make SAS drives the more attractive solution.
With SAS, the limit on the number of devices that can be connected is determined by bandwidth.
With FC, the maximum number of addressable devices is 128. Therefore, there can be only four shelves per loop. With SAS, additional loops can be created, so there is no port burn (as there is on FC in very large system environments).
Bandwidth over SAS can be better than bandwidth over FC. SAS drives are currently a little less expensive than FC drives.
Few SAS storage devices are available, and no standalone storage systems have SAS drives. Sun is the only NetApp competitor that offers a SAS-class drive.
SAS Usage
When compared to SATA, SAS provides these advantages:
– Higher performance
– Higher I/O per second
– Faster response times
Higher I/O per second is required for small, random-read, intensive application workloads (typical of Microsoft Exchange and OLTP).
NetApp Confidential 63
SAS and FC drives spin at the same speeds. There are 10K and 15K SAS drives. The only difference between
SAS and FC drives is the interface.
SAS has matured as a drive option. Unlike FC drives, SAS drives have management traffic on one channel and data traffic on another channel. Therefore, a loop initialization primitive (LIP) storm, which can easily occur on FC drives, cannot occur on SAS drives.
If a storm occurs, it occurs on the management channel and does not affect data traffic. On SAS, every devicecan be reset. On FC, device resets are quite disruptive, and the loop may or may not stay up.
In the FC-Arbitrated Loop (FC-AL) protocol, a device that enters the loop and attempts to initialize sends out a LIP to request an address. All other activity on the loop stops as each device re-establishes its connection within the new configuration. A LIP storm occurs when all of the drives on an FC-AL loop (which may be a large number) attempt to change or re-establish their names and numbers on the loop. Because SAS uses a separate channel for drive management, LIP storms do not affect the data transmission channels.
Similarities Between SAS and FC Drives
NetApp Confidential 64
                               FC*                  SAS*
Rotational Speed               15,000 RPM           15,000 RPM
Average Rotational Latency     2.0 ms               2.0 ms
Seek Time, Average Read/Write  3.5 or 4.0 ms        3.5 or 4.0 ms
Transfer Rate (Maximum)        125 MBps sustained   125 MBps sustained
Number of Interface Ports      2                    2
Except for the drive interface, SAS drives and FC drives are mechanically the same:
Same magnetic, mechanical, electronic, and microcode technologies
Same rotational speeds
Same reliability
* For FC and SAS drive specifications: http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah_15k_5.pdf
What is the likelihood of two drives within a RAID group failing simultaneously? The answer depends on the
definition of “failure.”
If “failure” refers to the hardware failure of a drive, the likelihood of two drives failing simultaneously is very
small (tiny).
If “failure” refers to the following scenario, the likelihood significantly increases:
1. One failure occurs.
2. The system performs the reconstruction process.
3. During the reconstruction, a bit error occurs on a drive.
The system considers the bit error to be a second failure, and all data is lost.
When a bit error occurs, the drive is still usable (a good drive with probably nothing wrong). The drive can be
reformatted and reinitialized, but the data cannot be recovered. A RAID failure has occurred.
The likelihood that a bit error will occur on a SATA drive during reconstruction is approximately 18%. Thus,one reconstruction in five is expected to fail. The frequency of occurrence is a function of (a) the size of the
disk drive and (b) the typical drive-level, error-correction capabilities.
Therefore, because RAID-DP technology eliminates the bit-error risk, it is recommended for all drives.
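The reconstruction risk follows from simple probability: the chance of at least one bit error while reading N bits at bit error rate r is 1 - (1 - r)^N, and the risk compounds across reconstructions. A sketch with illustrative inputs (the 500-GB drive size and 1-in-10^14 bit error rate are assumptions for the example, not the exact figures behind the 18.4% in the table below):

```python
def p_bit_error(bits_read, ber):
    """Probability of at least one unrecoverable bit error while
    reading bits_read bits at bit error rate ber."""
    return 1 - (1 - ber) ** bits_read

# Rebuilding an 8-drive RAID 4/5 group reads the 7 surviving drives in
# full. With assumed 500-GB drives and a 1-in-10^14 bit error rate:
bits = 7 * 500e9 * 8
print(round(p_bit_error(bits, 1e-14), 3))   # about 0.24 for these inputs

def p_any_failure(p_single, n_reconstructions):
    """Chance that at least one of n reconstructions hits a bit error."""
    return 1 - (1 - p_single) ** n_reconstructions

# Using the 18.4% per-reconstruction figure quoted above:
print(round(p_any_failure(0.184, 5), 3))    # about 0.64 over five rebuilds
```

The same arithmetic shows why the risk grows with drive size: larger drives mean more bits read per reconstruction, and therefore a higher chance of hitting a bit error before the rebuild completes.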
Reasons for Using RAID-DP Technology
To enable primary application reliability, RAID-DP technology is required.
SATA drives are twice as likely to fail.
Drive failures result in RAID reconstructions—twice as many SATA reconstructions.
Assuming five reconstructions per year, the use of RAID 5 promises an almost 100% chance of data loss from bit error.
RAID-DP technology eliminates the bit-error risk.
System Reliability Event                                                               FC       SATA
Typical Disk Drive Replacements (per year per 100 drives)                              1 – 3    2 – 5
Bit Error Likelihood (per spindle)                                                     0.2%     2.3%
Bit Error Likelihood – Single Parity (per reconstruction of an 8-Drive RAID 4/5 Set)   1.6%     18.4%
Bit Error Likelihood – Dual Parity (per reconstruction of an 8-Drive RAID-DP® Set)     < 1 in a billion
NetApp Confidential 68
DS4243: FLEXIBLE STORAGE FOR NETAPP UNIFIED STORAGE ARCHITECTURE
Industry-standard architecture that is based on Storage Bridge Bay (SBB) leverages several storage
connectivity technologies to provide flexibility for future deployments.
NetApp Unified Storage Architecture enables customers to choose not only the right protocol, right storage
tier, and right performance but also the right price-point to address their changing business needs.
Mature, second-generation, point-to-point SAS-based architecture enables high resiliency and fault isolation and recovery.
Frame-array-class resiliency, enabled by ACP, provides secure, out-of-band management communication that is separate from the data path of the disks.
Multiple redundant components are combined with a nondisruptive upgrade capability.
NetApp RAID-DP technology (low-overhead, high-performance RAID 6) provides greater data protection and capacity utilization than RAID 5 and RAID 1+0 technologies provide.
A DS4243 shelf with SATA drives requires two power supplies, and a shelf with SAS drives requires four.
The DS4243 delivers greater density — 30% denser storage capacity: 24 drives in 4U (versus 14 drives in 3U with the DS14).
Greater Resiliency
ACP is an out-of-band management architecture that isolates management communication from the data path. The use of out-of-band management for disk subsystems has historically been found only in high-end, frame-array storage systems. In an out-of-band management implementation, disk health is monitored by using a communications path that is separate from the data path. To provide an out-of-band implementation, the DS4243 uses dedicated Ethernet ports for ACP.
With current FC-AL technologies, management communication and the data path often use the same wire. Therefore, certain classes of errors can hang the connection between the disk subsystem and the storage controller. Incorporating out-of-band management capability with the SAS architecture helps to circumvent these types of error conditions. Point-to-point SAS technology isolates drive errors and prevents them from bringing down an entire loop.
Greater Bandwidth
The DS4243 uses “wide SAS ports.” The ports enable four data-communication paths, each of 3-Gbps SAS bandwidth. Together, the ports can accommodate up to 12-Gbps bandwidth (compared to the 4-Gbps bandwidth that FC accommodates). Because few workloads push the bounds of the 4-Gbps bandwidth that the DS14 provides, few workloads experience significant performance improvement from the SAS wide ports on the DS4243. However, the ports provide investment protection. Future controller upgrades will be able to take advantage of the additional bandwidth.
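The wide-port arithmetic is straightforward to check:

```python
# A DS4243 "wide" SAS port aggregates four 3-Gbps lanes
sas_lanes, gbps_per_lane = 4, 3
sas_wide_port_gbps = sas_lanes * gbps_per_lane   # 12 Gbps per wide port
fc_loop_gbps = 4                                 # 4-Gb FC loop on a DS14

print(sas_wide_port_gbps)                 # 12
print(sas_wide_port_gbps / fc_loop_gbps)  # 3.0 -- three times the FC loop
```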
Reduced Power Consumption
Greater than 10% reduction in the number of watts consumed per TB of storage
Power supplies that offer power efficiency greater than 80%
Advantages Provided by the DS4243
Greater density—30% denser storage capacity with 24 drives in 4U (versus 14 drives in 3U with DS14)
When should customers consider the DS4243 for their installations?
Customers must ensure that a PCIe I/O slot is available in the FAS/V controller for the SAS HBA. Availability of the slot is required for connectivity to the DS4243.
By using a SAS HBA in the FAS/V controller, customers can connect to the DS4243 disk shelf with FAS/V6030, FAS/V6040, FAS/V6070, FAS/V6080, FAS/V3170, FAS/V3160, FAS/V3140, FAS/V3070, and FAS2050 systems. Customers can add DS4243 disk shelves to the systems that are installed in their infrastructure, provided that a PCIe I/O slot is available in the FAS/V controller for the SAS HBA.
MetroCluster configurations are not supported with the DS4243. A DS4243 with IOM3 modules uses SAS cables that are limited in distance to 5 m. Because MetroCluster configurations need to support distances of up to 100 km, they require FC connectivity. To address the distance limitation with SAS, NetApp will, in the future, make available an FC-SAS bridge module for the DS4243. Meanwhile, customers should use DS14 shelves for their MetroCluster configurations.
Additionally, customers who require a DC-power solution must use DS14 configurations — at least until a DC-powered DS4243 becomes available.
Decision: DS4243 or DS14
Sell the DS4243 for:
FAS/V6000 systems
FAS/V3100 systems
New sales of FAS2040 and FAS2050 systems
New SA200, SA300, and SA600 systems
Sell the DS14 for:
FAS/V6000, FAS3100, FAS3070, FAS3040, and FAS2050 systems when no PCIe slot is available
FAS2020 configurations with external expansion
MetroCluster capability
DC-power systems
Situations in which every DS14 must be at least five meters away from adjacent controllers or shelves
NOTE: Some DS14 EOA plans have been announced.
NOTE: Except for the FAS2040 system, it is assumed that a PCIe slot is available for the SAS HBA in the FAS/V controller.
NetApp Confidential 75
For an audience of IT directors and managers and technical contributors, these are the key points:
Small form-factor (2.5-inch) drives that make it possible to shrink a 24-drive shelf from 4U to 2U
Doubled SAS interconnect bandwidth (to 6 Gbps)
Same-size SAS drives as with the DS4243, with slower (10,000-RPM) rotation and equivalent initial pricing
Approximately 20% lower IOPS per drive (OLTP workload) but 60% higher IOPS per rack unit
A shelf that is as dependable as the DS4243, the most dependable NetApp shelf ever
The 15,000-RPM drives that are available in the 2.5-inch SFF are one-fourth to one-half the size of the 10,000-RPM drives and cost twice as much per GB. For this reason, NetApp offers only the 10,000-RPM drives in the SFF.
DS2246: Greater Density and Speed
Twenty-four 2.5-inch small form-factor drives in only 2U of rack space
6-Gbps SAS interconnect and backplane
10,000 RPM SAS disk drives with a size of 450 GB or 600 GB
30% to 50% lower power consumption than with a DS4243 shelf
Same availability and resiliency features as provided by the DS4243 shelf
The DS2246 Disk Shelf
For an audience of IT directors and managers and technical contributors, these are the key points:
SSDs are best suited for random, read-intensive workloads that require consistently fast response times.
SSDs are available in the DS4243, which houses 24 drives in 3.5-inch form-factor carriers.
Each shelf provides approximately 2 TB of raw capacity. For best results, this very fast media should be matched with a high-performance storage controller.
The SSD shelf requires four power sources.
Data ONTAP 8.0.1 or later is required. Supported systems: FAS and V-Series 3160, 3170, 3240, 3270, 6040, 6080, 6210, 6240, and 6280.
Auto-tiering software is not available. Use Flash Cache instead of SSDs when (a) the workload is random read-intensive, (b) hot data is unknown or dynamic, and (c) an administration-free approach is desired.
SSDs in a NetApp Disk Shelf
SSDs can provide consistently fast response times.
SSDs are supported in the highly reliable DS4243 disk shelf.
Twenty-four 100-GB SSDs can be used per shelf.
SSDs are available with high-performance NetApp FAS and V-Series storage controllers.
Dual redundant IOMs, which are standard for the DS4243, provide resilient multipath high availability (MPHA).
The industry-standard SBB-based architecture provides flexibility for future deployments and a mature, stable connectivity architecture for disk enclosures.
IOM modules define the connectivity of the disk shelf.
IOM is the SAS equivalent of AT-FCx and ESH4 in DS14 disk shelves.
Dual redundant IOMs, which are standard per shelf, provide resilient multipath high availability (MPHA) connectivity.
Each IOM contains two ACP ports and two SAS ports.
IOM3 provides 3-Gbps SAS connectivity on the DS4243.
IOM6 provides 6-Gbps SAS connectivity on the DS2246.
IOM3 and IOM6 are not interchangeable.
IOM Modules
Flash Cache modules are PCIe cards that provide enterprise-class, single-level cell (SLC) flash memory and
custom memory-management units. The cards fit into the expansion slots of a storage controller.
NetApp Flash Cache modules are intelligent-read caches that contain user data and NetApp metadata. The
word “intelligent” is used because what is cached is determined by which of three modes of operation is selected. For detailed information about operation modes, see the Technical FAQ.
Active data flows automatically into the cache and all storage behind the controller is subject to caching.
When the disk subsystem (not the CPU) is the obstacle, the traditional way to increase I/O throughput is to add disks. If additional capacity is not needed, the addition of disks wastes storage. With caching, the storage
system’s I/O throughput is increased without the addition of disks.
This caching approach is effective for workloads that are random in character and read-intensive. Examples
include file services, OLTP databases, messaging, and virtual infrastructure.
With NetApp Flash Cache modules, results can be predicted. A feature of Data ONTAP 7.3 and later, called “Predictive Cache Statistics,” simulates the presence of a cache under the workload. The feature can predict whether adding cache would be helpful and how much cache should be added.
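Predictive Cache Statistics itself is a Data ONTAP feature, but the idea behind it can be illustrated with a small, self-contained sketch (assumed trace and function names, not NetApp code): replay a block-access trace through an LRU model at a hypothetical cache size and report the resulting hit rate.

```python
from collections import OrderedDict

def simulate_hit_rate(block_trace, cache_blocks):
    """Estimate read-cache hit rate for a hypothetical cache size
    by replaying a block-access trace through an LRU model."""
    cache = OrderedDict()
    hits = 0
    for block in block_trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # refresh LRU position
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(block_trace)

# A skewed trace: a small hot set is re-read often, plus cold one-time reads.
trace = ([1, 2, 3, 4] * 50) + list(range(100, 200))
for size in (2, 4, 50):
    print(size, round(simulate_hit_rate(trace, size), 2))
```

Running the model at several candidate sizes shows where extra cache stops paying off — the same question PCS answers for a real workload.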
NetApp Flash Cache: Advantages
Optimize Performance and Reduce Cost
Improve average latency for random reads
Increase I/O throughput of disk-bound storage systems without adding disk drives
Reduce cost by using fewer, larger disk drives
Effectively service file services, databases, messaging, and virtual infrastructure
Predict your results before buying for an existing storage system
For customers who need new storage systems, position Flash Cache against disks. Flash Cache provides multiple ways to increase performance and decrease cost.
Many storage systems are configured with a large number of high-performance disks to provide adequate read
I/O throughput. As a result, storage capacity, power, and rack space are wasted.
With Flash Cache in the configuration, disks provide the capacity and some of the I/O throughput (IOPS). Flash Cache provides additional IOPS and faster response times. Eliminating unneeded disks can reduce the
purchase price of a system and provide ongoing power and rack-space savings.
Flash Cache can be combined with SATA drives to maximize capacity, minimize the number of disks, and obtain good performance.
Configure only with FC or SAS disks:
– Additional disk drives providing IOPS
– Inefficient use of storage capacity, power, and space
Configure with SATA disks and Flash Cache:
– More storage capacity
– An IOPS boost for SATA drives
– Cost savings for storage, power, and space
Configure with FC or SAS disks and Flash Cache:
– Disks provide capacity and IOPS.
– Flash Cache provides IOPS and reduces latency.
– Storage, power, and space costs are reduced.
Flash Cache: The Optimum Configuration
How to Increase Performance and Decrease Cost
REDUCING THE DURATION OF BOOT STORMS IN A VIRTUAL INFRASTRUCTURE
With the unique ability of NetApp to combine deduplication of primary storage with intelligent caching, the duration of boot storms within a virtual infrastructure can be reduced.
NOTE: The same effect is realized when FlexClone software is used with Flash Cache.
Flash Cache is deduplication-aware. That is, Flash Cache caches a deduplicated block only once and satisfies
read requests for all corresponding virtual blocks from the cache at least 10 times faster than going to disk.
A NetApp partner, Corporate Technologies Inc., published test results showing that, when NetApp deduplication and intelligent caching were used together, the duration of a boot storm was reduced from 15 minutes to 4 minutes. The original Performance Acceleration Module, precursor to Flash Cache, was used in this test. Here is a link to a blog with the test results:
Set realistic expectations about when Flash Cache will help and when it will not.
Using Flash Cache helps with many workloads but not with all workloads. Read caching is most effective for small-block, random, read-intensive workloads.
Flash Cache is not significantly helpful for sequential or write-intensive workloads or for CPU-bound problems.
Setting Expectations
Effective with Random-Read Workloads
Databases
File services
VMware, Hyper-V, and Citrix
Microsoft Exchange and SharePoint
Engineering and software development
In regard to fragmentation, NetApp is often criticized. However, all random-access storage media fragment.
NetApp can argue that, because of the WAFL file system, NetApp’s fragmentation problem is smaller than most vendors’ fragmentation problems. The NetApp system determines where blocks are written and lays out blocks in the most efficient way. WAFL lays out complete stripes much more often than other vendors’ systems do.
Certainly, fragmentation still happens, because the systems of both NetApp and NetApp’s competitors delete stripes from and open holes in stripes. However, competitors create stripes that contain holes. So, NetApp’s issue and competitors’ issues are very different.
To fix the fragmentation that is bound to occur, the NetApp system enables the reallocation of blocks — rewriting data and arranging it in clean stripes on the storage system. Because the stripes are laid out sequentially, sequential read-performance problems are reduced.
NOTE: Reallocation works by rewriting files; it cannot move data that is locked in Snapshot copies. Therefore, rewrites of blocks for a file in a Snapshot copy look like a delta. They require more storage space and increase the size of the delta on SnapVault or SnapMirror relationships.
NetApp recommends against running a full reallocation. Because the system assumes that all data is new, a 100% delta is created in the next Snapshot copy. In essence, the process creates a new baseline for the replication relationship — and thus creates a need for 100% more storage space. A full reallocation should be considered only for volumes that are less than 50% full.
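A toy model (invented block names, not Data ONTAP behavior) makes the warning concrete: rewritten blocks cannot replace blocks that a Snapshot copy still holds, so a full reallocation both doubles the space for locked data and becomes the next replication delta.

```python
# Toy model: reallocation rewrites blocks, but blocks locked in a
# Snapshot copy cannot move, so every rewrite becomes a new (delta) block.
snapshot_blocks = {"b1", "b2", "b3"}     # locked by a Snapshot copy
active_blocks = {"b1", "b2", "b3"}       # the file currently shares them

# A full reallocation rewrites every active block to a new location.
active_blocks = {b + "_relocated" for b in active_blocks}

# The Snapshot-only blocks and the rewritten copies both consume space,
# and the rewritten set is what SnapMirror/SnapVault must transfer.
delta = active_blocks - snapshot_blocks
total_blocks = snapshot_blocks | active_blocks
print(len(delta), len(total_blocks))     # 3 delta blocks, 6 blocks stored
```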
Fragmentation Management: Reallocation
Available in Data ONTAP 7.0 and later, running in the background at non-busy times
Useful for
– Improving spatial locality of files and LUNs
– Solving sequential read-performance problems
Requiring these cautions
– Reallocation rewrites files.
– You cannot move data that is locked into Snapshot copies.
– If Snapshot copies are present, sufficient free space is required.
– Rewritten data is changed data, and SnapMirror software moves the changed blocks.
Typically, a user tells the system what to reallocate and when to reallocate. A low-priority process runs in the background when the system isn’t busy, and if the system becomes busy, the process stops. In this case, small amounts of data are reallocated every day.
If small amounts of data are reallocated each day, the system creates small deltas — never creating a large delta. If reallocation is turned on at the beginning of the creation of a FlexVol volume, the reallocation process can be controlled — never creating a big delta in a Snapshot copy.
No other RAID reallocation tool performs like NetApp’s reallocation process. In most competitors’ environments, to lay out data in a clean form, a migration must be performed. NetApp reallocation can be performed live, with little effect on system performance.
Another example of when reallocation is useful is the addition of disks. Assume that an aggregate contains 16 disks and 1 RAID group and that you decide to add a RAID group of 16 disks. Immediately after the addition, data resides on only the original 16 disks (thus on half of the disks). To spread the data across all 32 disks, you must reallocate. If you do not reallocate, new writes span all 32 drives, but the original data resides on only the original 16 disks.
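The imbalance, and the effect of reallocation, can be sketched numerically (a toy round-robin model with made-up block counts, not actual WAFL allocation):

```python
# Toy model: block counts per disk before and after adding a
# 16-disk RAID group, then after a reallocation pass.
old = [100] * 16          # original disks each hold 100 data blocks
new = [0] * 16            # newly added disks start empty
disks = old + new

# New writes stripe across all 32 disks, but the old data stays put.
for i in range(32 * 10):  # 320 new blocks, round-robin striping
    disks[i % 32] += 1

print(min(disks), max(disks))   # imbalance: 10 vs 110 blocks per disk

# Reallocation rewrites the existing data evenly across all disks.
total = sum(disks)
disks = [total // 32] * 32
print(min(disks), max(disks))   # balanced: 60 vs 60 blocks per disk
```

The first print shows the hot/cold split the text describes; the second shows the even layout after reallocation.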
In the future, NetApp intends to enable the reallocation process to move Snapshot blocks. Moving Snapshot blocks is a complex process. For example, a block held in a Snapshot copy might have 25 pointers to it, each pointer located on a different inode map. Therefore, the move process requires not only the physical movement of the block but also a cascade of inode updates. This intensive operation will probably be available as an option.
Windows hole punching is similar to reallocation. An issue in regard to LUNs is the inability to identify when a deletion has occurred. The SCSI command set does not include a delete command. A “delete” is a code statement that says “this inode is free.” But such blocks can be identified only through NTFS.
Reallocation: How, When, and What
Full Reallocation, Defragmented
With the hole-punching feature, blocks that are being freed are identified. Once identified, the WAFL system can be used to free the blocks. As NetApp engineers gain more understanding of the file systems that are contained within a LUN, NetApp will offer more features and improve upon its current features. For example, the reallocation process will be more successful now that the blocks that are used within a LUN are distinguished from the blocks that are not used within the LUN.
For some configurations, trade-offs between the amount of memory and the number of disks are required.
In a highly competitive deal, you may be tempted to tweak the performance setup. However, we recommend that you do not do so. In most cases, you should focus on high disk performance and not try to balance memory and disks against each other.
For FlexVol considerations, we are pushing toward flexible volumes and aggregates. In almost all environments, performance of all of the disks and creation of a big I/O pool are huge wins. There are some small trade-offs. For example, FlexVol volumes have additional metadata needs, so they have additional memory needs. But those needs are quite small compared to the gain that is achieved by creating large pools of IOPS, the flexibility that is gained by cloning features, and so on.
Relationship Between Memory and Disks
For some configurations, performance trade-offs are required.
– With more memory, fewer disks may be needed; with less memory, more disks may be needed.
– Slower disks need more memory; faster disks need less memory.
When considering the memory-disk question, you should
– Consider the customer’s applications and architecture
– Understand the customer’s environment
– Use NetApp whitepapers and sizing tools
CIFS is not a high-performance protocol, but tens of thousands of CIFS users create a high-performance load.
Consolidation of CIFS users provides a great opportunity. Administrators appreciate the benefits, and they expect consolidation to produce fantastic performance.
Be aware of and careful about anti-virus needs and advanced CIFS features. One such CIFS feature is SMB server signing. This feature was introduced in one of the Windows 2000 service packs and is included in Windows Server 2003. If two systems are enabled for SMB server signing (enabled by default on Microsoft systems), the signing occurs automatically. An MD5 signature is added to every packet that is transmitted between the two systems. The addition of the MD5 signatures adds a huge load to the CPU.
NetApp supports SMB server signing. However, if it is turned on, the CPU will probably peg and performance will decrease. At this time, there is little demand for SMB signing. If customers begin to demand SMB signing, we will qualify an MD5 offload card and offload all of the calculations to a daughterboard to reduce the impact on the CPU.
Be aware of the impact on performance. Make sure that customers understand what CIFS is doing, as MD5s are complicated and can impair performance on the client side. MD5 signing is not only a server feature. Other features to be aware of are quotas and oplocks.
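As a rough illustration of why per-packet signing is expensive (a simplified stand-in, not the actual SMB signing calculation), note that every transmitted packet requires an extra hash computation:

```python
import hashlib

# Illustration only: SMB signing attaches a signature computed over every
# packet, so hashing cost scales directly with traffic volume.
def sign_packet(session_key: bytes, payload: bytes) -> bytes:
    # Simplified stand-in for SMB signing (the real protocol combines
    # the session key and message differently).
    return hashlib.md5(session_key + payload).digest()

key = b"session-key"
packets = [bytes([i % 256]) * 1024 for i in range(1000)]  # 1000 x 1-KB packets
signatures = [sign_packet(key, p) for p in packets]
print(len(signatures), len(signatures[0]))  # 1000 signatures, 16 bytes each
```

One hash per 1-KB packet means a busy CIFS server performs this work millions of times per minute — which is why signing can peg the CPU.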
Consolidated CIFS environments are expected to be high performance, so you should ensure that very large CIFS environments are sized appropriately.
CIFS Performance
CIFS is not a high-performance protocol. Each connection places a low performance demand, but tens of thousands of CIFS users produce a high-performance load.
Consolidation of CIFS users requires that you
– Consider and size carefully.
– Use the home directory sizing guide
– Use the Custom Application Sizing tool
– When possible, correlate CIFS usage with statistics collection—very powerful
You must be aware of
– Anti-virus needs
– Advanced CIFS features (for example, SMB signing, quotas, and oplocks)
With consolidation, CIFS environments are expected to be high performance, so the environments require careful attention.
The choice between iSCSI and FC is often a question of business, politics, or philosophy — rarely a technical question. NetApp systems work effectively and efficiently with both iSCSI and FC, so whatever the customer prefers is the right choice.
Typically, customers who already have FC choose FC, because they want to leverage the environment that they have. Typically, customers who do not have FC choose iSCSI, because starting a new fabric requires a large investment.
The only performance caveats concern software initiators, which require more CPU load. Bandwidth aggregation makes iSCSI relatively competitive with FC. For most cases, iSCSI performance is similar to FC performance.
iSCSI Versus FC
The choice is often a business, political, or philosophical choice—not atechnical choice.
When technical factors are considered
– Customers who have FC tend to prefer FC
– Customers who don’t have FC tend to prefer iSCSI
These factors affect performance.
– iSCSI software is easy and cheap.
– iSCSI uses more CPU (on host and storage)—often not an issue.
– iSCSI hardware has a typical NIC cost; CPU consumption is less than with software initiators.
– In regard to bandwidth, an FC wire is typically two times an Ethernet wire; however, this fact is rarely an issue (just use multiple wires).
For most cases, iSCSI performance is similar to FC performance, so performance considerations rarely determine the choice.
This diagram illustrates the concept of back-end and front-end LUNs. Basically, the storage-array LUNs that use the array vendor’s RAID grouping are moved into a Data ONTAP aggregate.
After an aggregate is established, provisioning is performed as it is within any NetApp storage system, and front-end LUNs, either iSCSI or FC, are then established within the FlexVol environment.
The underlying storage system presents LUNs to the V-Series system. Then the LUNs are used as if they were disks. The options are to configure the system so that it presents one massive LUN or to obtain multiple LUNs and do RAID 0 across them from within the V-Series system. RAID 0 is seen on live NetApp systems only in V-Series systems.
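The layering described above can be sketched as a minimal data model (hypothetical class, volume, and LUN names, not Data ONTAP code): back-end array LUNs are pooled into an aggregate, which hosts FlexVol volumes that present front-end iSCSI or FC LUNs.

```python
from dataclasses import dataclass, field

# Hypothetical names: a toy model of the V-Series layering --
# array LUNs -> aggregate -> FlexVol volumes -> front-end LUNs.
@dataclass
class Aggregate:
    array_luns: list                       # back-end LUNs, used like disks
    flexvols: dict = field(default_factory=dict)

    def create_flexvol(self, name, size_gb):
        self.flexvols[name] = {"size_gb": size_gb, "frontend_luns": []}

    def create_frontend_lun(self, vol, lun_name, protocol):
        assert protocol in ("iscsi", "fc")  # the two block protocols served
        self.flexvols[vol]["frontend_luns"].append((lun_name, protocol))

# Three back-end LUNs from two arrays pooled into one aggregate:
aggr = Aggregate(array_luns=["arrayA:lun0", "arrayA:lun1", "arrayB:lun0"])
aggr.create_flexvol("vol_db", size_gb=500)
aggr.create_frontend_lun("vol_db", "/vol/vol_db/lun0", "iscsi")
print(aggr.flexvols["vol_db"]["frontend_luns"])
```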
V-Series Logical Topology
Aggregate
Storage Array LUN
Disk RAID Group
Storage Array Back End
V-Series Front End
FC and iSCSI LUNs
Storage Array LUN
Disk RAID Group
Storage Array LUN
Disk RAID Group
FlexVol Volumes
It is very common, in a traditional environment, to require that different architectures be deployed from different vendors, depending on the size of the environment and on the protocols that are involved. This requirement produces management complexity, and complexity increases cost. NetApp competitors offer no single solution for managing these various arrays and configurations.
The V-Series system, running the Data ONTAP multiprotocol architecture, addresses the management problem that is created by having islands of storage configuration. If a customer is running the environment that is depicted in the illustration, NetApp’s approach is appealing, because it provides centralized management configurations that enable the presentation of a consolidated view.
The Traditional Heterogeneous Storage Model
(Diagram: Vendor A, Vendor B, and Vendor C each provide separate departmental and enterprise NAS and SAN islands, connected over Ethernet, LAN, FC, and iSCSI.)
Each vendor’s environment has a unique management interface and data-management suite.
V-Series systems provide the only heterogeneous storage solution that unifies NAS, SAN, and IP-SAN under one storage architecture. Instead of using the diverse architectures that were previously required to manage heterogeneous environments, NetApp’s customers use one management interface and one set of software to run everything. Obviously, NetApp provides its customers a much simpler, much more cost-effective solution.
ASK YOUR CUSTOMERS WHAT THEY WOULD DO IF THEY COULD DO THE FOLLOWING
Here is a script that you can use for presenting the advantages of NetApp solutions to customers:
“I’d like to ask you a few questions to help me understand your business drivers. Perhaps some of these will resonate with the challenges that you face. What if you could buy 50% less storage in your virtualized environment by using time-proven, industry-leading storage-efficiency technologies and best-practice implementation? We guarantee that you’ll need 50% less. Moreover, we guarantee that you’ll need 35% less storage even if you continue to use your existing storage assets.
“What if you could cut total IT spending in half? We did this for Sensis, an Australian provider of online information services, with an IP storage network and creation of best practices for storage administration. What if you could continue to grow but avoid having to build a new data center? We did this for Thomson Reuters — in fact, we helped Thomson Reuters to defer investing in three new data centers — and for British Telecom (BT). We also did it for ourselves. Are you interested in delivering power savings to your business that will help you to fulfill new environmental responsibility objectives and meet emerging data-center regulatory pressures?
“We can help you to speed up time-consuming processes like provisioning and backups that inhibit agility and expose you to risk while delivering storage efficiency that will help you to provision and back up according to an extremely competitive business model.
“Do you plan to deliver IT as a service, either through your enterprise cloud or by outsourcing? We can help you with either approach. NetApp provides storage and data management for the leading “as-a-service” providers in the market today. Providers of storage-as-a-service, software-as-a-service, infrastructure-as-a-service, and platform-as-a-service choose NetApp to support their market offerings.
Ask Your Customers What They Would Do If They Could Do the Following
Use 50% less storage
Cut IT expenses by half
Avoid building a new data center
Reduce data-center power and cooling loads
Speed IT response to business needs
Accelerate time to market
Deliver IT as a service
“They choose NetApp because of the flexible, unified architecture and the broad data-protection, retention, and business-continuity solutions that, when combined with storage efficiency, allow them to offer the most complete and responsive services in a compelling pricing model. Services from Oracle, SAP, Facebook, Navitaire, T-Systems, Siemens, BT, Iron Mountain, and the world’s most popular online music service all have NetApp at the heart of their offerings.”
What are your priorities? Talk with customers about their businesses, challenges, and goals. These are the challenges and opportunities with which we help customers around the world.
RAID 6 (RAID-DP technology) protects against double-disk failure without sacrificing performance or adding disk-mirroring overhead.
Thin provisioning (FlexVol technology) keeps a common pool of storage readily available to all applications.
Thin replication (SnapVault and SnapMirror software) enables block-level, incremental data backup and replication for significant storage and bandwidth savings.
Snapshot copies provide instant, point-in-time data copies with minimal storage space.
Virtual copies (FlexClone volumes) use virtual cloning to create on-demand, space-efficient virtual clones of volumes, LUNs, and individual files.
Deduplication across applications and protocols identifies, validates, and removes redundant data blocks from volumes for up to 95% disk savings.
Data compression is performed inline and immediately reduces the amount of stored data.
Software Efficiencies
Deduplication removes data redundancies in primary and secondary storage. (Save up to 95%)
Data compression reduces the footprint of primary and secondary storage. (Save up to 87%)
Thin provisioning (FlexVol technology) creates flexible volumes that appear to be a certain size but are a much smaller pool. (Save up to 33%)
RAID 6 protection (RAID-DP technology) helps to protect against double-disk failure with little performance penalty. (Save up to 46%)
Thin replication (SnapVault and SnapMirror software) makes data copies for disaster recovery and backup and uses a minimal amount of space. (Save up to 95%)
Snapshot copies are point-in-time copies that write only changed blocks, with minimal performance penalty. (Save over 80%)
Virtual copies (FlexClone volumes) are near-zero-space, instant, “virtual” copies. Only subsequent changes in the cloned dataset get stored. (Save over 80%)
Explain the use of large-capacity SATA drives in enterprise applications.
Flash Cache is the current brand name for PAM II, the next-generation card that replaces the originalPerformance Acceleration Module (PAM).
The use of 1-TB SATA drives instead of 144-GB FC drives results in seven times more capacity.
Because SATA drives store much more data per disk, resiliency is important. RAID-DP technology provides this resiliency but without the capacity overhead of disk mirroring.
The new PAM can effectively increase the read performance of SATA drives, which allows you to use SATA drives in more applications.
The combination of SATA drives, RAID-DP technology, and PAM radically changes what constitutes “high-performance” storage.
Hardware Efficiencies
Using High-Density Disk Drives
High-performance storage utilizes:
– SATA high-density disk drives
– RAID-DP technology
– Flash Cache
The net effect is:
– Six times higher density per watt
– Three to seven times higher capacity per rack (for example, 144-GB FC versus 1-TB SATA)
– An increase in both storage efficiency and performance
(Diagram: a RAID-DP group with data disks (D) and double parity disks (P P))
This figure shows the effect of NetApp FlexClone technology on storage efficiency. You can:
Create a virtual “clone” copy of the primary dataset
Choose to store only data changes between the parent volume and the clone
Quickly create copies of production data to test product lifecycle management (PLM) software upgrades before deployment
Quickly create “sandbox” environments for test and QA
FlexClone copies are invaluable in testing and development environments.
Instead of provisioning a large amount of storage capacity to perform application testing, the production application data is “shared” with the test data, which results in extreme efficiency.
FlexClone Virtual Clones
(Diagram: production storage versus test and development storage.)
Without FlexClone software: a 6-TB database (gold copy) plus five full copies consumes 30 TB of storage.
With FlexClone software: a 6-TB database (gold copy) with one copy and four clones consumes 8 TB of storage.
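The figures in the slide can be checked with simple arithmetic; the per-clone delta size below is an assumption chosen to match the slide’s totals:

```python
# Back-of-envelope check of the FlexClone example (delta size is assumed).
db_tb = 6
full_copies = 5
without_flexclone = full_copies * db_tb          # 5 full 6-TB copies
clone_delta_tb = 0.5                             # assumed change per clone
with_flexclone = db_tb + 4 * clone_delta_tb      # 1 copy + 4 thin clones
print(without_flexclone, with_flexclone)         # 30 vs 8.0 TB
```

Because clones store only subsequent changes, the savings grow with the number of clones and shrink as the clones diverge from the parent.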
FAS deduplication is a general-purpose space-reduction feature available on FAS systems. When FAS deduplication is enabled, all data in the specified flexible volume can be scanned at intervals and duplicate blocks removed, resulting in reclaimed disk space. NOTE: FAS deduplication is not supported on V-Series, R100, R150, FAS250, and FAS270 systems or on the 800 and 900 series controllers.
FAS deduplication runs as a background process, and the system can perform any other operation during this process. FAS deduplication is a post-processing task and is performed on a volume at an average rate of 30-50 MB/sec (108-180 GB/hour). Up to eight volumes can be deduplicated simultaneously. It is important to note that although the deduplication process runs as a low-priority background task, deduplicating eight volumes simultaneously will place significant load on the system.
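From the stated scan rate, a rough per-volume scan time can be estimated (illustrative arithmetic only; real times vary with system load):

```python
# Rough scan-time estimate from the stated 30-50 MB/s post-process rate.
volume_gb = 1024                        # a 1-TB volume
low_h = volume_gb / 108                 # at 108 GB/hour (slow end)
high_h = volume_gb / 180                # at 180 GB/hour (fast end)
print(f"{high_h:.1f}-{low_h:.1f} hours")   # roughly 5.7-9.5 hours
```

This is why deduplication is scheduled for off-peak windows: a full pass over a large volume is measured in hours, not minutes.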
NetApp Deduplication Overview
Proven technology:
Over 30,000 systems licensed for deduplication
Design for enterprise storage:
– Integrated tightly with Data ONTAP software
– Available on FAS and V-Series systems
– Suitable for primary, backup, and archival storage tiers
Broad customer platform options:
Multiple platforms that scale in capacity, performance, and price
Deduplication storage efficiencies for reduced costs:
– Reduce physical data storage costs
– Reduce space, power, and cooling costs
– Store more data per physical storage system
DEDUPLICATION OF STORAGE THAT IS NOT NETAPP STORAGE
This slide shows the V-Series architecture, which virtualizes and pools heterogeneous storage at the back end. The V-Series controller is a multiprotocol controller that provides both NAS and SAN capabilities. No other vendor provides a single controller that can serve the NFS, HTTP, and CIFS protocols and FC and iSCSI for LUNs.
V-Series architecture delivers centralized management for provisioning, disaster recovery, backup recovery, compliance, and retention across heterogeneous storage at the back end.
Deduplication of Storage That Is Not NetApp Storage
(Diagram: NetApp V-Series systems with NetApp deduplication sit in front of departmental and enterprise SAN and NAS arrays, pooling disk RAID groups into an aggregate and volumes, and serving CIFS and NFS over the LAN as well as FC and iSCSI LUNs.)
Storage systems use “reference pointers” to read and write data. That’s necessary so that users can find data after they’ve written it.
Look at the four data blocks at the bottom of the graphic. The two green blocks indicate that the data is the same. By eliminating the redundant block and pointing its reference at the original block, the storage system effectively turns the bottom-right block into free space that is available again. That is fundamentally how deduplication works from a data-structure standpoint.
Deduplication consists of two major components: WAFL (Write Anywhere File Layout) block sharing and finding common blocks.
A reference count metafile keeps track of how many times a given block appears in qtrees in the active file system. In effect, this is an array that is indexed by VVBN. The size of each entry is 16 bits (8 bits used), so the metafile requires 0.5 GB per TB of volume space. The maximum sharing for a block is 256.
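The sizing claim can be verified with quick arithmetic (illustration only, not Data ONTAP code): one 16-bit entry per 4-KB block works out to exactly 0.5 GB of metafile per TB of volume space, and 8 usable bits give the 256-way sharing limit.

```python
# Checking the reference-count metafile sizing: one 16-bit entry per 4-KB block.
bytes_per_entry = 2                     # 16 bits per entry
block_size = 4096                       # 4-KB WAFL block
entries_per_tb = (1024**4) // block_size
metafile_gb = entries_per_tb * bytes_per_entry / 1024**3
print(metafile_gb)                      # 0.5 GB of metafile per TB of volume

max_sharing = 2**8                      # 8 bits used -> a block can be shared
print(max_sharing)                      # 256 times at most
```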
How Does Deduplication Work?
Storage systems use inodes and reference pointers to read and write data.
NetApp deduplication uses multiple pointers to reference a single block. The same basic technology has been used in NetApp Snapshot copies for over 15 years.
(Diagram: two inodes point through indirect blocks to four data blocks; identical data blocks are shared by multiple pointers.)
The following are highlights of what NetApp deduplication implementation achieves.
Key Messages:
Users can find and remove duplicate 4K WAFL blocks.
The most popular configuration is to schedule deduplication at slow times. (Other configurations are threshold, manual, and through the SnapVault scheduler.)
Because this occurs at a low level, it is transparent to applications. In addition, NetApp can accommodate any interface that is supported by FAS systems.
Deduplication for FAS systems runs as a low-priority background process.
How Does Deduplication Work?
Removes duplicate 4K WAFL blocks
Uses a postprocess that can be scheduled
Is transparent to applications
Supports any interface or protocol
Is a low-priority background process
Explain the effect of NetApp deduplication on storage efficiency.
Deduplication searches for and removes duplicate data.
NetApp is the market leader in deduplication, and thousands of customers use deduplication in production.
NetApp deduplication is different in that it can be applied to a broad variety of applications and storage tiers, including primary storage, replicated storage, backup storage, and archival storage.
NetApp Deduplication
Over 20,000 systems utilize NetApp deduplication.
Deduplication removes redundant data blocks from volumes, regardless of application or protocol.
With deduplication, users can recoup 50% of their capacity on average and up to 95% for some datasets and environments.
Only NetApp offers deduplication for primary, secondary, and archival storage tiers.
Adding deduplication to a NetApp storage environment is a 10-minute process: Add the licenses, enable the volumes to be deduplicated, and schedule the deduplication to run at specified intervals.
Inside NetApp Deduplication
Deduplication removes duplicate WAFL blocks:
– Is available at no charge (the deduplication license is free to add)
– Is enabled volume by volume
– Operates at the 4K block level
Is for any interface or protocol:
– CIFS, NFS, FC, iSCSI, and NDMP
Is application-transparent:
– Is content-agnostic
Requires minimal overhead:
– Write performance overhead is approximately 7%.
– Read performance overhead is approximately 0%.
Includes these features that were released in January 2009:
– Larger volume sizes
– Checkpoint restart
– Performance improvements
The presenter must understand that the top part of this slide doesn’t discuss deduplication; it refers to backup.
Because of Snapshot copies, users don’t copy data when they take a backup; only the changed blocks are moved. The bottom example is the more common view and understanding of deduplication.
NetApp has changed the rules of deduplication. No longer only for backup data, NetApp deduplication provides space savings across all storage tiers: primary, backup, and archival data.
Benefits of NetApp Deduplication
Backup data:
– NetApp deduplication removes redundant data from backups.
– Savings are displayed as a ratio, for example, “20:1 space savings after 30 backups.”
Nonbackup data:
– NetApp deduplication removes redundant data from a single volume.
– Savings are displayed as a percentage of the total volume, for example, “User data reduced by 30% after deduplication.”
[Diagram: time-based deduplication shows successive backups (Backup 1 through Backup 4) sharing deduplicated data, with only new data adding to the actual storage consumed; volume-based deduplication shows duplicates identified and removed from a single volume’s original data, reducing the actual storage consumed]
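The two presentations of savings described above, a ratio for backup data and a percentage for nonbackup data, are two views of the same number. These hypothetical helpers show the conversion:

```python
# Converting between the two savings presentations: a logical-to-physical
# ratio (e.g. "20:1") and a percentage of capacity saved (e.g. "30%").

def ratio_to_percent(ratio: float) -> float:
    """A 20:1 ratio means only 1/20 of the data is physically stored."""
    return (1 - 1 / ratio) * 100

def percent_to_ratio(percent: float) -> float:
    """30% savings means 70% remains, i.e. a 1/0.7 ratio."""
    return 1 / (1 - percent / 100)

print(ratio_to_percent(20))            # 95.0 -> "20:1" equals 95% saved
print(round(percent_to_ratio(30), 2))  # 1.43 -> "30% saved" is about 1.43:1
```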
NetApp data compression in Data ONTAP 8.0.1 7-Mode requires:
A written agreement (policy-variance request, or PVR) to control use cases (more details later in this course)
Free licenses (deduplication and compression)
Deduplication licensed and enabled on the volume (but not necessarily scheduled)
64-bit aggregates only
A maximum volume size of 16 TB

Data ONTAP 8.0.1 7-Mode: Compression supports only 7-Mode (not Cluster-Mode) configurations.
Deduplication: NetApp data compression requires deduplication to be enabled on the same volume. After you enable deduplication, you can choose to enable data compression. You do not need to schedule deduplication to run; you only have to enable it on the same volume.
Free licenses: NetApp data compression requires both the deduplication license and the compression license. Both are free.
64-bit aggregates: NetApp data compression does not support 32-bit aggregates, and no plans exist for supporting them.
FlexVol volumes: Compression is enabled per FlexVol volume and works on FlexVol volumes only, not on traditional volumes.
Maximum volume size: The limit is the same as for deduplication. For Data ONTAP 8.0.1, the limit is 16 TB for all supported storage systems.
NetApp Data Compression: Architecture Requirements
Data ONTAP 8.0.1 and later versions:
– Is available for 7-Mode and Cluster-Mode
Installation of free licenses:
– Compression
– Deduplication
FlexVol software requirements:
– A 64-bit aggregate
– A maximum volume size of 16 TB
– Deduplication enabled (does not need to be run)
– Compression enabled per FlexVol volume
When you have compression and deduplication enabled on a FlexVol volume, you get immediate space savings with compression and cumulative savings with postprocess deduplication. The total savings are not necessarily the sum of the individual savings. Refer to When to Select Deduplication and the Compression Best Practice Guide for more details.
Compression can reduce the footprint of the initial data that is written to disk, and deduplication removes duplicate WAFL blocks.
Compression and Deduplication
Immediate space savings with inline compression
Cumulative space savings with postprocess deduplication
[Diagram: raw data is reduced inline to compressed data; postprocess deduplication then yields compressed and deduplicated data]
These are sample savings that NetApp has achieved with internal and customer testing. Actual customer savings are highly dependent on the data type and data layout. It is highly recommended that you test your actual data, both with the Space Savings Estimation Tool (SSET) and in a test and development environment.
For Data ONTAP 8.0.1, NetApp recommends running only nonperformance-sensitive applications, such as File Services and IT infrastructure, on the primary storage infrastructure. Other data types may be good targets for the backup and archive tiers.
Compression and Deduplication Savings
Representative Savings by Application

Dataset Type              Compression  Deduplication  Combined  NetApp
                          Savings      Savings        Savings   Recommendation
Primary and secondary
  Geoseismic files        40%          3%             40%       Compression
  Engineering data files  55%          30%            75%       Both
  Virtual servers         55%          70%            70%       Deduplication
  Home directories        50%          30%            65%       Both
Backup and archive only
  Database and Biz Apps   65%          0%             65%       Compression
  Exchange 2010           35%          18%            37%       Both*
  Exchange 2003           37%          3%             38%       Compression

These are typical space savings; actual results may vary. Use the Space Savings Estimation Tool (SSET) v3.0.
*Exchange 2010: deduplication for primary; compression for backup and archive
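Note that the combined column is not the sum of the other two columns. One rough back-of-envelope model, an assumption for illustration rather than NetApp's published method, applies the two reductions in sequence by multiplying the remaining fractions:

```python
# A rough model of combined savings: apply compression, then deduplicate
# what remains. This is an illustrative approximation only; real combined
# savings depend on how much duplicate data survives compression.

def combined_savings(compression_pct: float, dedup_pct: float) -> int:
    remaining = (1 - compression_pct / 100) * (1 - dedup_pct / 100)
    return round((1 - remaining) * 100)

# Home directories: 50% compression and 30% dedup -> 65%, matching the table.
print(combined_savings(50, 30))   # 65

# Engineering data: the model gives about 68%, below the table's 75%,
# showing why measured results beat simple arithmetic for some datasets.
print(combined_savings(55, 30))
```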
For primary storage, the use cases are limited to those that are not performance-critical, typically applications that run on SATA drives or NAS, such as File Services, engineering applications, and seismic data.
Compression and Deduplication
In deduplication, NetApp is the established market leader, with over 87,000 licenses and over one exabyte of data being deduplicated.
Deduplication and compression utilize the Data ONTAP architecture.
Advanced deduplication increases the number of applications for which NetApp can offer space savings.
Only NetApp offers deduplication and compression for primary, secondary, and archival storage tiers.
Compression provides immediate space savings, with additional space savings from postprocess deduplication.
Users often recoup 50% or more of their capacity.
These features eliminate the need for an off-box solution.
Primary Storage Use Cases
Customers who:
Look for ways to reduce primary storage consumption
Do not want to run a third-party compression solution
Achieve low deduplication savings
Have applications that are not performance-critical:
– File Services
– Engineering applications
– Seismic data
For secondary storage, the use cases are much less limited and may include applications such as File Services, databases, and Exchange. Backup and archive solutions that perform their own compression do not benefit much from data compression. These customers may choose to disable the compression feature on their backup and archive solutions to offload the resource overhead that compression causes. Note that enabling compression on a backup and archive tier increases the time that it takes to complete a backup. NetApp recommends that you test this in your environment before implementation.
Secondary and Archive Storage Use Cases
Customers who:
Look for ways to reduce backup and archive storage consumption
Have backup and archive jobs that consume too much disk space
Cannot store enough backups because of space constraints
Want to use SnapVault software or another backup solution
Look for ways to reduce the cost of backup and archive tiers
Storage efficiency translates to environment savings.
Power, cooling, and data-center space are increasingly important to NetApp customers. NetApp consistently outperforms EMC and HP in environmental impact. Details can be found in the Oliver Wyman study that is noted on this slide.
Most NetApp customers face increasing pressure on space, power, and cooling in their data centers. Some are running out of space. Others can’t get new power from their utility. Some customers find that today’s high densities have maxed out their cooling infrastructure.
With governments around the world scrutinizing data-center power consumption and the increasing global pressure for environmental stewardship, the power, cooling, and space benefits of NetApp solutions can help customers to directly address those concerns and challenges.
NetApp solutions can cut footprint, power, and cooling loads by half. Most NetApp customers see extremely favorable ROIs, often paying for themselves in under a year. BT’s ROI was eight months. Its annual power savings alone were $2.4M.
NetApp has a host of products that can help customers to get more from their systems, which eliminates the need for wholesale changes, extends the life of infrastructure that they may not be ready to replace, and enables customers to extend their mixed-vendor storage arrays with NetApp capabilities without having to take those systems out of production.
NetApp Uses 50% Less Power, Cooling, and Space
[Chart: three panels comparing NetApp with the competition — Half the Power (52% less power (VA) per usable TB), Half the Cooling Load (51% less heat (BTU per hour) per usable TB), and Half the Space (53% fewer total rack units per 10 TB). Possible range based on environment-specific factors and typical environments.]
Source: Oliver Wyman Study: “Making Green IT a Reality,” November 2007. Competitors: EMC CLARiiON and HP EVA.
infrastructure, help customers to meet backup requirements, and help customers to recover rapidly when needed. And, by storing only copies of changed data, NetApp delivers protection that helps customers to affordably protect their data, their businesses, and their reputations.
NetApp disk-based backup solutions can be used in combination with NetApp mirroring and replication technologies to provide the most cost-effective business-continuance solutions that are available today. Tested and proven for complex environments, NetApp technology allows customers to mirror data to a remote site and delivers records of changes at any interval that the customers choose. When a failure occurs, a customer can retrieve the desired copy of the data instantaneously and quickly resume business without disruption of service.
NetApp software-based archiving and compliance solutions are unique in the industry. Not only do these solutions totally eliminate the cost of redundant compliance storage by using a single copy for both backup and compliance, they also eliminate the need for dedicated compliance systems. Only the NetApp unified architecture lets customers consolidate e-mail, file, database, enterprise resource planning (ERP), and CMS data on a single platform.
NetApp also provides industry-leading information classification and management, which enables data discovery to mitigate litigation and compliance risks and enables efficient management of storage tiers to lower the cost of archival storage.
Unified Protection for the Entire Data Center Environment
Continuous availability: six nines of uptime
Disaster recovery for complete site protection
Backup and recovery for Snapshot copies and for tape environments
Archive and compliance for long-term retention and ongoing access
Security to encrypt data while the system is up and during scheduled downtime
[Diagram: NetApp primary storage alongside other vendors’ storage through virtualization]
The Efficient IT calculator has been enhanced and now quantifies savings when using NetApp deduplication for FAS, VTL, and primary and archival datasets. Tool users see personalized reports that show how much money, space, and time they can save when using deduplication.
This tool can be found on the Field Portal.
The Efficient IT Calculator (1 of 2)
This tool is a confidential NetApp product. It is intended for use only by NetApp employees and authorized NetApp partners when analyzing data at current or prospective NetApp customer accounts. By installing this software, you agree to keep the tool and its results confidential to NetApp, the NetApp authorized partner, and the customer account.
Overview:
FAS deduplication is a NetApp storage space-saving technology that increases stored data efficiency by deduplicating and storing only unique data. The SSET for Linux crawls through all the files in the specified path and estimates the space savings that will be achieved by FAS deduplication.
NOTE: This tool reports the percentage of duplicate data that is found in the file system, not the amount of data that is actually saved by enabling FAS deduplication. The tool is for estimation only.
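SSET itself is distributed only through NetApp, but the kind of scan it performs can be sketched. The following is a hypothetical simplification (fixed 4 KB block alignment, SHA-256 fingerprints, no size cap), not the actual SSET algorithm:

```python
# Toy sketch of a duplicate-block scan: hash every 4 KB block under a path
# and report what fraction of blocks has been seen before. This is a
# hypothetical simplification, not the real SSET implementation.
import hashlib
import os

BLOCK_SIZE = 4096

def duplicate_percent(root: str) -> float:
    seen = set()
    total = dupes = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                f = open(os.path.join(dirpath, name), "rb")
            except OSError:
                continue  # skip unreadable files, as any crawler must
            with f:
                while True:
                    block = f.read(BLOCK_SIZE)
                    if not block:
                        break
                    digest = hashlib.sha256(block).digest()
                    total += 1
                    if digest in seen:
                        dupes += 1
                    else:
                        seen.add(digest)
    return 100 * dupes / total if total else 0.0
```

For example, a directory containing two identical copies of a file reports roughly 50% duplicate blocks, mirroring the "percentage of duplicate data found" that the note above describes.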
The SSET
This tool is available on the NetApp Field Portal.
Version 3.0 of the Space Savings Estimation Tool provides support for three configurations:
Deduplication only
Compression only
Combined savings (compression followed by deduplication)
SSET scans local, CIFS-mapped, or NFS-mounted file systems only. It can analyze data from any source; in other words, it does not require the data to be on NetApp storage. It can be run from either a Windows or a Linux machine. This tool is limited to evaluating a maximum of 2 TB of data. If the path contains more than 2 TB of data, the tool indicates that the maximum size has been reached and presents the results for the 2 TB of data that it has processed.
Currently, the tool is available only to NetApp field and partner personnel. They can run it at the customer site but must remove it when they are finished testing. This tool cannot be left with the customer.
SSET 3.0 is currently available by request only to Sandra Moulton but will be released with 8.0.1 to the Field Portal.
The SSET 3.0
Analyzes existing data
Predicts savings for:
– Deduplication only
– Compression only
– Both (compression followed by deduplication)
Is run from Windows or Linux clients with read access to data
Does not require data to be on NetApp storage
1. Set up your analysis. NetApp enters your information and reviews default assumptions for your financial environment.
2. Identify the existing storage environment. NetApp works with you to identify data requirements and existing storage technology.
3. Propose solutions. Propose storage solutions that NetApp believes are appropriate alternatives to existing solutions.
4. Analyze and compare. NetApp Realize analyzes the financial impact of proposed solutions and shows savings and benefits compared to the existing system.
5. Present results. Use the NetApp Realize outputs as a summary to allow you to take the next step.
Creating an Analysis
Identify the customer and set up a new analysis
Edit default financial values
Identify the customer’s data requirements and existing storage
Propose one or more solutions to replace existing storage
Calculate the cost savings of your proposed solutions, ROI, and payback period
Present results
Having the correct tools is probably the single most important factor in being efficient and productive when you perform service-delivery work. For every delivery hour that you can save, you increase your margins, reduce risk, and improve customer satisfaction.
NetApp has worked hard to ensure that the NetApp services teams have high-quality, feature-rich tools. As part of the NetApp partner program, to ensure your success, NetApp has extended those tools to you for your use and benefit.
NetApp Synergy is a suite of applications that can assist you with pre-sales through post-sales activities. NetApp has more than a dozen applications that you can choose from to meet your specific needs, but today, this course focuses on Storage Design Studio (SDS).
NetApp Learning Center training courses are available for this tool. See the NetApp Learning Center foravailability.
NetApp Synergy: a Suite of Applications
http://synergy.netapp.com
After your proposed solution is agreed upon and the deal is closed, it is critical that you deploy the components with accuracy and efficiency. SDS helps you to ensure that all your deployments proceed as quickly, accurately, and effectively as possible.
SDS is an application plug-in that offers fine-grained configuration, automated storage provisioning, and integrated Word and Visio documentation.
Design, configure, and document
Storage Design Studio
An application that:
Enables you to build accurate, detailed configurations of NetApp storage controllers
Provides the capability to rapidly design a solution, make changes to the design, and see results immediately
Generates configuration scripts
Generates high-quality, “as-built” storage-solution models and detailed documentation
The Data ONTAP operating system functions as the Windows file server, and it replaces Windows boxes. Windows administrators may be put in other roles because of this shift, so be mindful of the political implications of your sales pitch.
Multiple Windows systems map to one NetApp controller: the number depends on a customer’s servers, how much traffic they have, and how powerful the servers are.
Windows servers tend to be lightweight boxes with many disks connected to them. They are not usually big, high-performance systems.
Consolidation as large as 80-to-1 has been seen in Windows environments, but as little as 5-to-1 has also been seen.
Generally, you can size the Windows file system based on a customer’s storage need.
CIFS is generally not used in high-performance types of environments. Performance is usually less of a question.
In large environments, pay attention to maximum users, shares, and open files. Scale is based on system memory, which is documented in standard Data ONTAP documentation.
Windows File Serving: Consolidation
The Data ONTAP operating system functions as the Windows file server.
A high number of Windows servers exists per NetApp controller. (The controller is much more scalable than with Windows.)
You can usually size a Windows file server simply based on customer storage needs. This can be a great point of entry to an account.
The maximum number of users, shares, open files, and so on, is noted in documents. The numbers are generally high, based on system memory.
This image represents a typical Windows file-serving environment. The computers at the top may be thousands of users on the network who access data on the servers below. You can see that each server is independent, with its own backup systems, storage capacity, and administrative needs. In a typical file-server environment, hundreds or even thousands of these servers may exist. The challenge here is maintaining all of the servers, backing them up, keeping them up-to-date, and utilizing storage assets effectively.
Here we see the same users, but now they are accessing one consolidated server that interoperates fully with the Windows environment. From the perspective of the Windows clients and administrators, it looks like a Windows file server that allows them to leverage existing Windows applications and tools. NetApp integrates with Active Directory, supports Kerberos authentication and Group Policy Objects, and integrates with Volume Shadow Copy Service (VSS), which is Microsoft’s snapshot implementation. It also works with existing software for backup, storage management, and antivirus scanning.
The benefits are many. Because customers are moving their data from slow, unreliable servers with direct-attached storage (DAS) to a highly reliable enterprise-class file server, they get highly available storage. They can also consolidate hundreds of file servers with minimal impact on users and take advantage of pooled storage for more efficient use of storage resources. With pooled storage, customers also get the ability to expand their storage capacity without disruption for just-in-time provisioning, which also greatly increases their storage-utilization levels. Another benefit is heterogeneous file sharing, which allows rapid, secure access for data sharing to both UNIX and Windows users. Because the storage systems to manage are fewer and simpler, customers get simplified data management. Finally, because NetApp is built on open systems, it seamlessly integrates with existing software and hardware.
This is a big opportunity.
A Seamless Transition: Integration with Windows Infrastructure
Integrate into a Windows environment:
Active Directory and Group Policy support
Kerberos and Lightweight Directory Access Protocol (LDAP) support
Leveraging of existing Windows administration tools such as Microsoft Management Console (MMC)
Integration with MS Volume Shadow Copy Service (VSS)
[Diagram: a typical Windows file-serving environment before NetApp — separate servers for Windows project shares, user home directories, and software development, CAD, and so on, each accessed over CIFS against a Microsoft Active Directory server — and an efficient, highly available Windows file-serving environment after NetApp, with the same CIFS workloads consolidated on one system with Microsoft Active Directory integration]
Most environments have at least one team, group, or department that has Linux as its standard desktop operating system. That is the classic mixed environment, and it is no problem for NetApp.
Data ONTAP can provide simultaneous NFS and CIFS access to the same file system — the same files. In these multiprotocol environments, Data ONTAP guarantees file locking across the two environments. If a Windows user has a file open for edits, that file lock is honored from the NFS side and vice versa. Many customers use cross-protocol file access. However, avoid the mixed security style.
Three security styles are available. A security style is a piece of metadata that exists on a qtree; every qtree must have a security style. A system-wide default is set based on the last-licensed NAS protocol or by the administrator. Any qtree’s security style can be changed on the fly, within certain rules.
The three security styles are:
UNIX
NTFS
Mixed
Seeing those three names, customers assume that if they have a mixed environment, they must set the mixed security style. That is not true. Any of these settings can give full read and write access to both CIFS and NFS clients. The best practice is to choose the dominant security style, usually the one that needs to be able to set the security, and use that one for the qtree security style. Users connecting from the opposite protocol are mapped to the dominant protocol.
The /etc/usermap.cfg file defines mappings.
For example, a Windows user comes in to a UNIX-style qtree. The account is mapped to a UNIX account. The UNIX account permissions are read and applied to the Windows user. If the mapped UNIX account has access, the Windows account has access.
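As a rough illustration of what such mappings look like, the entries below follow the commonly documented 7-Mode usermap.cfg syntax (`==` maps in both directions, `=>` maps Windows to UNIX only). The account names are invented for this example:

```
# Hypothetical /etc/usermap.cfg entries -- names are examples only.
# Map a Windows domain account to a UNIX account in both directions:
CORP\jsmith == jsmith
# One-way map: the Windows administrator acts as UNIX root, not the reverse:
CORP\administrator => root
# Deny a mapping by mapping the account to the null user:
CORP\guest => ""
```

Check the Data ONTAP documentation for the exact syntax and evaluation order before relying on entries like these.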
Mixed UNIX and Windows File-Serving Environments
NetApp provides UNIX and Windows access to the same file: the traditional NetApp multiprotocol:
– A likely advantage in an engineering or graphics shop
– Guaranteed file locking (see TR3014 and TR3024)
– Linux to the desktop?
NetApp is an established leader in NFS:
– Original NetApp controllers are best-of-breed NFS servers.
– NetApp made the first NFS server that is supported by Oracle for databases.
When is the mixed security style useful? Only if there is a strong business case for having both security styles active in the same file system. Some files will have read, write, and execute bits from UNIX. Some files will have ACLs and ACEs from NTFS. A given file or directory can have only one type of security or the other. On individual files, it is a problem, but it is a manageable problem. Folders are a more difficult challenge, given the intricacies of NTFS permission inheritance.
INTEGRATION WITH SERVER AND NETWORK INFRASTRUCTURE
NetApp provides seamless integration with the environment while consolidating multiple file servers.
Seamless integration allows customers to integrate with authentication environments such as Microsoft Active Directory (AD), AD LDAP, OpenLDAP, AD Kerberos, and MIT Kerberos.
Consolidation allows customers to consolidate the data that serves applications such as home directories, shared storage, custom applications, technical applications, and software development.
Many file servers are consolidated into one NetApp system, which reduces the administrative overhead and TCO.
NetApp has supported Windows 2008 AD and the SMB 2.0 protocol since Data ONTAP 7.2.4 and Data ONTAP 7.3.1, respectively.
Integration with Server and Network Infrastructure
[Diagram: before NetApp — separate servers for shared storage, home directories, and software development, authenticating against Active Directory (AD), LDAP, or Network Information Service (NIS); with NetApp — the same workloads consolidated on one system with the same AD, LDAP, or NIS authentication]
The products that are based on Snapshot technology create an advantage for NetApp. These products provide the ability to mirror from a Snapshot copy, lock down a Snapshot copy for compliance, and so on.
SAN provides easier administration:
LUNs are not directly tied to disk; no disk management is required. Industry-leading flexibility is provided for changes.
Data provisioning is simple. LUN creation, growing, and shrinking are simple. With SnapDrive software and Windows 2008, you can shrink a disk. Windows 2003 does not provide this capability. A best practice is to use Volume Manager and add and remove LUNs (but do not shrink them).
NetApp offers LUN cloning for test environments, for reports, and for other purposes.
NetApp offers a hardened storage subsystem with integrated data protection, among other features.
The majority of NetApp technologies apply to SAN environments and to NAS.
NetApp SAN Advantages (FC and IP)
Easier administration:
– LUNs that are not tied to disk; no disk management
– Industry-leading flexibility for changes
Simple data provisioning: LUN creation, growing, and shrinking
LUN clones or FlexClone volumes for testing, report generation, and verification
A hardened storage subsystem with integrated data protection: RAID-DP technology, SyncMirror software, MetroCluster, and Snapshot technology
SnapDrive software is part of the NetApp server suite and comprises SnapDrive for Windows and SnapDrive for UNIX.
SnapDrive software allows all storage-provisioning activities to be managed from the host (by the server administrator):
LUN creation
igroup creation
Mapping
Partitioning
Formatting
Mounting
Windows and UNIX versions:
Windows 2000 Server and Windows Server 2003
AIX, Solaris, HP-UX, Red Hat Linux, SUSE, and Oracle Enterprise Linux
SnapDrive software handles dynamic volume management. SnapDrive software provides OS-consistent Snapshot copies, which is the most important technical reason for having SnapDrive software (along with the management reasons). Customers get near-instantaneous restores with SnapRestore software, which is multipathing-aware and cluster-aware, and customers get OS-consistent replication.
SnapDrive Software: Extending NetApp Simplicity to SANs
Do not create LUNs on the root storage system volume /vol/vol0.
For better Snapshot copy management, do not create LUNs on the same storage system volume if those LUNs must be connected to different hosts.
If multiple hosts share the same storage system volume, create a qtree on the volume to store all LUNs for the same host.
SnapDrive for Windows allows administrators to shrink or grow the size of LUNs. Never expand a LUN from the storage system; otherwise, the Windows partition does not expand properly.
Make an immediate backup after expanding the LUN so that its new size is reflected in the Snapshot copy. Restoring a Snapshot copy that was made before the LUN was expanded shrinks the LUN to its former size.
Do not place LUNs on the same storage system volume as other data; for example, do not place LUNs in volumes that have CIFS or NFS data.
Calculate the LUN size according to application-specific sizing guides, and account for Snapshot usage if Snapshot copies are enabled.
Depending on the volume or available SnapReserve space, use the volume automatic grow or automatic delete option to avoid a volume-full condition that is due to poor storage sizing.
Refer to the NetApp Interoperability Matrix and check the following items:
Confirm that SnapDrive for Windows supports the environment.
For specific information about requirements, see the SnapDrive 6.2 for Windows Installation and Administration Guide.
See the FC and iSCSI Configuration Guide for Data ONTAP.
Always download the latest Host Utilities from the download section of the NetApp Support site.
NetApp recommends that you perform all procedures from the system console and not from a terminal service client.
After you complete the preceding checklist, see the steps in the SnapDrive 6.2 for Windows Installation and Administration Guide for the details of how to install SnapDrive for Windows.
Refer to the SnapDrive 6.2 for Windows Release Notes for the latest fixes, known issues, and documentation corrections.
SnapDrive for Windows: Components
Interfaces:
– MMC
– SnapDrive command-line interface (CLI)
Services:
– Core NetApp SnapDrive services
– Data ONTAP Virtual Disk Service (VDS), which interacts with Windows VDS for disk and volume management
– Data ONTAP VSS, which interacts with Windows VSS for Snapshot copy management
Initiators:
– iSCSI
– FC
Is a process that allows blocks that are marked free in the NTFS metadata block to be freed on the DataONTAP LUN
Does not require a new license
Provides better space utilizationGlobally unique identifier (GUID) partition table (GPT) partitions are part of the extensible firmwareinterface (EFI). This standard is phasing out BIOS, which relies on master boot record (MBR) partitions.
MBR:
Supports four primary partitions, or three primary partitions and an extended partition with up to 128 logical drives
Has a maximum basic-volume size of two terabytes
Contains only one copy of the partition table
GPT:
Can have 128 primary partitions
Can address up to 18 exabytes logically, although Windows file systems impose a limit of 256 terabytes
Contains two copies of its partition table, has CRC32 fields for partition data-structure integrity, and on checksum failure can recover itself from the backup copy
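The self-healing behavior in the last bullet can be illustrated with a small sketch (this is not a real GPT parser; the zero-filled table and the helper name are invented for this illustration — GPT genuinely does store a CRC32 of the partition-entry array and keep a backup copy of the table):

```python
# Sketch of the GPT integrity idea: the header records a CRC32 of the
# partition-entry array, and a damaged primary table can be detected
# and recovered from the backup copy at the end of the disk.
import zlib

entries = b"\x00" * (128 * 128)    # pretend partition-entry array (128 x 128 B)
stored_crc = zlib.crc32(entries)   # what the GPT header would record
backup = bytes(entries)            # GPT keeps a second copy of the table

def read_table(primary, backup_copy, expected_crc):
    if zlib.crc32(primary) == expected_crc:
        return primary
    # Checksum failure: recover from the backup copy instead.
    return backup_copy

corrupted = b"\xff" + entries[1:]  # primary table damaged on disk
assert read_table(corrupted, backup, stored_crc) == entries
```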
SnapDrive for Windows Features
Space reclamation (NTFS hole punching)
Globally unique identifier (GUID) partition
table (GPT) disk partitions
igroup management
Thin provisioning of LUNs
NetApp Confidential 27
The following are the key features of SnapDrive for Windows:
Enhancement of online storage configuration, LUN expansion and shrinking, and streamlined management
Support for connections of up to 168 LUNs
Integration with Data ONTAP Snapshot technology, which creates point-in-time images of data that is stored on LUNs
Works in conjunction with SnapMirror software to facilitate disaster recovery from either asynchronously or synchronously mirrored destination volumes
Enables SnapVault updates of qtrees to a SnapVault destination
Enables management of SnapDrive software on multiple hosts
Enhances support of Microsoft cluster configurations
Simplifies iSCSI session management
Enables technology for SnapManager products
NOTE: igroup management controls igroup creation and naming within SnapDrive software.
Thin provisioning of LUNs:
Controls less than 100% of the fractional space reservations from SnapDrive software
Monitors fractional reserve usage
SnapDrive software offers a supported way to ensure that the file systems and volumes are consistent when a Snapshot copy is created.
A customer must still ensure that the application that is running on the file system and volume is consistent before the customer creates the Snapshot copy.
Host-driven Snapshot technology provides a one-step process for quiescing and synchronizing file systems and volumes before creating a Snapshot copy, to ensure data integrity.
As of SnapDrive for UNIX 3.0, the Windows and UNIX versions are similar, with the exception that SnapDrive for Windows has a GUI.
SnapDrive for UNIX Features
Maps file system mountpoints to newly created or existing LUNs
Supports all major UNIX platforms including AIX, Solaris, HP-UX,
Red Hat Linux, SUSE, and Oracle Enterprise Linux
Grows file systems on demand in a nondisruptive way
Supports protocols like FC, iSCSI, and NFS
Provides host-driven Snapshot technology:
– Snapshot copy of volumes in same NetApp storage
– Snapshot copy of volumes across NetApp storage systems
Uses NetApp Manage ONTAP for secure communication with
NetApp storage systems
Supports a range of multipathing and clustering technologies
(appropriate to host)
NetApp Confidential 28
When you create a FlexVol volume, you must consider the space guarantee type. Three different concepts can be confusing, because they are similarly named:
Snapshot reserve: the space that is reserved for active Snapshot copies on a volume (20% by default; can be adjusted)
Space guarantee: a method of guaranteeing that writable space is available for the volume
Space reservation: primarily a LUN mechanism that is used to guarantee expected writable space
The focus of the next section is on the three space guarantee types for FlexVol volumes: volume, none, and file.
Flexible Volume Creation: Space Guarantee Types
Flexible volume creation can be performed by
using CLI commands or System Manager.
The actual space is allocated from the
containing aggregate's file system space.
This allocation is controlled by the flexible
volume's "space guarantee" option.
Three "space guarantee" types are available:
volume, none, and file.
NetApp Confidential 30
The important point about FlexVol volumes is the ability to resize them. To do that, an administrator can use a command that is similar to this one: vol size FlexVol 50m. As long as the volume content is not larger than the number that is entered, the command immediately shrinks the volume to 50 MB. It does not actually move any data around on disk or shrink any tables; the Data ONTAP operating system adjusts an accounting number. As long as the space is free, the system allows the change. An administrator cannot destroy data by doing this; the system protects the administrator from making a fatal error.
Here is an interesting situation that is related to the resizing of volumes:
A Windows administrator calls in while on a Windows system that is connected over CIFS to the NetApp box. From the Windows system, the administrator sees that a 100-GB file system is running out of space. The administrator needs more to get a project done. The administrator is familiar with the storage systems, logs in, and runs a command to increase the volume size to 110 GB. When the administrator goes back to the Windows box and looks at it, it reports that 88 GB are available. The administrator thinks that something is wrong and dials 1-800-4NETAPP. What just happened?
This is not a bug. This is correct behavior, assuming default configurations. If Windows is reporting 100 GB that is usable by the file system, how large is the volume that is hosting it?
The default reserve for Snapshot copies is 20%, which means that the underlying volume is actually 125 GB. So when the administrator issued the command to change the volume size to 110 GB, the administrator actually shrank the volume by 15 GB, to 110 GB. The volume still has the 20% reserve for Snapshot copies, so the administrator reduced the size of the active file system to 88 GB, because 88 GB plus the 20% reserve takes the system back to 110 GB.
This is an example of why NetApp recommends that, before you do anything, you run the vol size command with no arguments. Doing so shows you the current size of your volume. Then, when you see 125 GB as the volume size, you will remember the reserve for Snapshot copies. This gives you the opportunity to do the correct math and resize the volume appropriately.
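The arithmetic in this story can be sketched in a few lines (the helper names are mine; the 20% default Snapshot reserve is from the text):

```python
# Snapshot-reserve accounting for the resize scenario above.

def usable_gb(volume_gb, snap_reserve_pct=20):
    """Active file system space the host sees for a given volume size."""
    return volume_gb * (100 - snap_reserve_pct) / 100

def volume_gb_for(usable, snap_reserve_pct=20):
    """Volume size needed so that the host sees `usable` GB."""
    return usable * 100 / (100 - snap_reserve_pct)

print(volume_gb_for(100))   # 125.0: 100 GB usable sits on a 125 GB volume
print(usable_gb(110))       # 88.0: resizing to 110 GB leaves 88 GB usable
```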
The following question comes up frequently: "Now that I have multiple volumes that are hosted inside a single physical container, is it bad if I accidentally delete that container?" The answer is yes. For that reason, you cannot remove an aggregate until all of its FlexVol volumes are removed. For a FlexVol volume to be removed, it must be taken offline and destroyed. Both commands require multiple responses to complete the removal. During removal of a volume, you must press ENTER four times to destroy each FlexVol volume, and you then need an additional four acknowledgments to destroy an entire aggregate. Removal of volumes and deletion of aggregates cannot be done accidentally; NetApp has a built-in safety net. Be aware, though, that after you destroy the physical container, you cannot revert to a Snapshot copy, because all of the Snapshot data was destroyed.
If you need to resurrect FlexVol volumes that were destroyed, you must have aggregate Snapshot copies. After a FlexVol volume is destroyed, it cannot be restored from its own Snapshot copies (because they were destroyed with it), but it can be restored from an aggregate Snapshot copy (if one was enabled). That restore also reverts everything else in the aggregate (which may hold multiple volumes) to get that FlexVol volume back. This may seem like a "big hammer" way to protect yourself from making a mistake, but it is possible.
Aggregate and Flexible Volume Removal
Aggregates cannot be removed until all flexible
volumes on the aggregate are removed.
Flexible volumes and aggregates can be removed
by using CLI commands and System Manager.
– For flexible volumes, use these CLI commands:
vol offline <FlexVol-name> and
vol destroy <FlexVol-name>
– In Cluster-Mode, you must unmount the volume
before offlining it.
– For aggregates, use these CLI commands:
aggr offline <aggr-name> and
aggr destroy <aggr-name>
NetApp Confidential 32
This presentation walks through four storage scenarios:
Local direct-attached storage: This scenario provides an example of a direct-attached disk as our basis for comparison.
SAN-attached storage backed by NetApp: Next, we compare a common SAN environment, without Snapshot technology.
SAN-attached storage with Snapshot copies, backed by NetApp: This scenario adds WAFL Snapshot copies and explains the potential pitfalls that they can introduce.
SAN-attached storage with Snapshot copies and space reservations, backed by NetApp.
Storage Scenarios with NetApp
Local and DAS
SAN-attached storage that is backed by NetApp technology
SAN-attached storage and Snapshot copies that are backed by NetApp technology
SAN-attached storage, Snapshot copies, and
space reservations that are backed by NetApp
technology
NetApp Confidential 34
In this case, the file system layer of the host OS issues an "ENOSPC" (no space) error. This is a normal condition. The OS responds by reporting back to the user or application that no space is available, and the write fails. Well-written applications have no problem with this error.
DAS Scenario
Local:
Write three-block file
Write five-block file
Write four-block file?
FS ENOSPC = Normal Condition
NetApp Confidential 36
In this case, it is the WAFL file system that issues the ENOSPC error in response to the host OS file system's request for four blocks. The host FS layer does not expect to be denied access to blocks that it knows, by its own accounting, should be available. The FS layer assumes that a hard error has occurred on the underlying disk and immediately disconnects the presumed-failed disk. For applications that access the disk, this is a catastrophic failure, with no guarantee that the data is left in a consistent state.
The Problem: SAN and Snapshot Copies
Backed by NetApp Snapshot technology
Write three-block file
Create a Snapshot copy
Write five-block file
WAFL (Write Anywhere File Layout File System)
ENOSPC = Disk Failure
Delete three-block file
Write four-block file?
NetApp Confidential 39
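The failure sequence on this slide can be walked through with a toy block-accounting model (a deliberate simplification, not the real WAFL allocator; the class and method names are invented for this sketch). The key point it demonstrates: blocks captured by a Snapshot copy stay allocated even after the host "deletes" the file.

```python
class ToyVolume:
    """Toy model of free-space accounting with Snapshot copies."""

    def __init__(self, total_blocks):
        self.total = total_blocks
        self.next_id = 0
        self.live = set()       # blocks referenced by the active file system
        self.snapshots = []     # each Snapshot copy pins a frozen set of blocks

    def used(self):
        pinned = set(self.live)
        for snap in self.snapshots:
            pinned |= snap      # snapshot-pinned blocks are still allocated
        return len(pinned)

    def write_file(self, nblocks):
        """Allocate nblocks new blocks; return None on ENOSPC."""
        if nblocks > self.total - self.used():
            return None         # ENOSPC: no free blocks left to allocate
        blocks = set(range(self.next_id, self.next_id + nblocks))
        self.next_id += nblocks
        self.live |= blocks
        return blocks

    def delete_file(self, blocks):
        # Deleting only drops the active references; Snapshot copies
        # may still pin the same physical blocks.
        self.live -= blocks

    def snapshot(self):
        self.snapshots.append(frozenset(self.live))

vol = ToyVolume(total_blocks=8)
f3 = vol.write_file(3)     # write three-block file
vol.snapshot()             # create a Snapshot copy (pins those 3 blocks)
vol.write_file(5)          # write five-block file; the volume is now full
vol.delete_file(f3)        # the host "deletes" the three-block file...
print(vol.write_file(4))   # ...but the four-block write still fails: None
```

Dropping the Snapshot copy (or reserving space up front, as the next scenario does) is what frees the pinned blocks again.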
The most important resources to consider when you plan SAN implementations are the qualification matrices. In the matrix are entries for:
Supported protocols
Notes about firmware, specific HBAs, and other hardware-specific data
Which versions of the Host Utilities are supported
Which host OS versions are supported
Which physical platforms are supported
Whether a software initiator is available and whether NetApp supports it
Any versioning restrictions
Driver availability
Which volume managers are available and which ones NetApp supports
Which multipathing software is available
Which file systems NetApp supports
The Data ONTAP version
Details on clustering
SnapDrive versions
A line entry exists for each possible supported configuration. The matrix is about 230 pages long, and it grows constantly. Unlike NAS, SAN requires qualification of everything that is mentioned above and more. If you do not see a combination that you need, ask for it. As NetApp works to grow its SAN presence, the sustaining NetApp engineering department works hard to get you support for whatever combination you need as quickly as it can. This may take a few weeks but generally does not significantly increase the sales cycle.
FC SAN and iSCSI Qualification Matrices
Matrices:
– Are kept up-to-date
– Are available to customers and channel partners
Unlike NAS, SAN (including iSCSI for now) requires
qualification for hosts, switches, and (FC SAN) HBAs.
If you do not see a combination that you need, ask for it.
Visit the Support Site for FC and iSCSI deployments:
http://support.netapp.com/
NetApp Confidential 44
When you talk about the core four protocols, you must consider which protocol you want in a given environment. Thankfully, most customers have this well established by the time that people come in to service accounts. FC SAN usually provides the highest-performance option, of course, yet performance is not always the top criterion, especially if the customer does not have an FC infrastructure. It is expensive to create that infrastructure if it does not exist. Certainly, SAN has the advantage of being totally application-independent: it looks like a hard drive. Anything that can run on anything can run on SAN.
NFS is the most OS-independent protocol. Because it has been around as a standard for so much longer than CIFS, better support is usually available in the NFS world.
You must watch to ensure that an application will be supported if the customer moves off of DAS. For a Windows server that runs an application, use SAN. Some, but not many, circumstances exist in which you can use CIFS from Windows; Microsoft requires the use of SAN for most applications.
NAS can be easier to administer, because only one file-system layer exists, and that layer is NetApp. Using NAS makes it easier to manage Snapshot copies, and you do not have to worry about space reservations.
Be a trusted advisor, but let customers make their own decisions.
Usually customers have reasons for their choices that are defined and in place. Frequently, these are not technical reasons. Be aware that some of their reasons may be political.
Which Protocol?
FC SAN is usually the highest-performance option;
however, performance is not always the top criterion.
SAN is application-independent.
NAS is mostly OS-independent: Standard protocols
are in UNIX, Linux, and Windows.
If a Windows server runs the application, use SAN.
Because no space reservation for Snapshot copies
exists, NAS can be easier to administer.
Be a trusted advisor, but let the customer decide.
NetApp Confidential 45
The Snapshot reserve specifies a set percentage of disk space for Snapshot copies. By default, the Snapshot reserve is 20% of disk space. The Snapshot reserve can be used only by Snapshot copies, not by the active file system. This means that if the active file system runs out of disk space, any disk space that remains in the Snapshot reserve is not available for active file system use.
NOTE: Although the active file system cannot consume disk space that is reserved for Snapshot copies, Snapshot copies can exceed the Snapshot reserve and consume disk space that is normally available to the active file system.
The Snapshot reserve is not a reservation of physical disk; it is an amount of space that is counted against Snapshot copies.
Snapshot Reserve
Snapshot reserve defines a percentage of the
volume that is reserved for Snapshot copies:
Set at the volume level:
netapp> snap reserve
Volume vol_SAN1: current snapshot reserve is 20% or 2097152 k-bytes.
Historically set to zero for volumes that are used in SAN environments
NOTE: Although the active file system cannot consume disk space that is reserved for Snapshot copies, Snapshot copies can exceed the Snapshot reserve and consume disk space that is normally available to the active file system.
[Diagram: Volume 1, showing its space reservation and the Snapshot reserve]
NetApp Confidential 52
This value, when changed from the default, is not persistent; it reverts to the default value after booting. So, to change this value (for example, to 90% for tiny volumes of less than 20 G) and make it persist after booting, add the following line to the /etc/rc file on both controllers:
priv set -q diag; setflag wafl_reclaim_threshold_t 90; priv set
Volume Autosize (1 of 2)
To grow the volume:
vol autosize determines whether a volume
should grow when it is nearly full.
Both snapshot autodelete and vol autosize
use the value wafl_reclaim_threshold:
– Data ONTAP 7.1 to Data ONTAP 7.2.3: 98%
– Data ONTAP 7.2.4 and later versions (threshold
depends on volume size):
Variable name                Volume size                        Threshold
wafl_reclaim_threshold_t     tiny volumes, < 20 G               85%
wafl_reclaim_threshold_s     small volumes, 20 G to < 100 G     90%
wafl_reclaim_threshold_m     medium volumes, 100 G to < 500 G   92%
wafl_reclaim_threshold_l     large volumes, 500 G to < 1 T      95%
wafl_reclaim_threshold_xl    extra-large volumes, 1 T and up    98%
NetApp Confidential 54
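The size tiers above can be expressed as a simple lookup (a sketch only; the function name and the 1 T = 1024 G conversion are my assumptions, not Data ONTAP internals):

```python
# The wafl_reclaim_threshold tiers from the slide's table as a lookup.

def reclaim_threshold_pct(volume_size_gb):
    if volume_size_gb < 20:
        return 85    # tiny
    if volume_size_gb < 100:
        return 90    # small
    if volume_size_gb < 500:
        return 92    # medium
    if volume_size_gb < 1024:
        return 95    # large (< 1 T)
    return 98        # extra large (1 T and up)

print(reclaim_threshold_pct(10), reclaim_threshold_pct(2048))   # 85 98
```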
Volume autosize can run only a maximum of 10 times on any particular volume. If you set the incremental size too small, you cannot expand the volume as much as you may want to. For that reason, it is generally recommended that you use the -m and -i switches when configuring the volume autosize feature, to set the incremental size and the maximum size to something larger than the defaults.
NOTE: The volume can grow only to a maximum size that is 10 times the original volume size.
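A toy simulation of these limits (the 5% increment, 120% maximum, and 10-grow cap come from the text; the function itself is an invented illustration, not the autosize implementation):

```python
# Why small autosize increments cap growth: the volume grows by a fixed
# increment, at most `max_grows` times, and never past the maximum.

def autosize_growth(original_gb, increment_gb=None, max_gb=None, max_grows=10):
    # Defaults from the text: increment 5% of original, maximum 120%.
    increment_gb = increment_gb if increment_gb is not None else original_gb * 5 / 100
    max_gb = max_gb if max_gb is not None else original_gb * 120 / 100
    size, grows = original_gb, 0
    while grows < max_grows and size + increment_gb <= max_gb:
        size += increment_gb
        grows += 1
    return size, grows

# With the defaults, a 100 GB volume stops at 120 GB after only 4 grows,
# which is why setting -m and -i larger than the defaults is recommended.
print(autosize_growth(100))   # (120.0, 4)
```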
Volume Autosize (2 of 2)
Configuration:
Is set at the volume level
Can use these values:
– ON:
Increment size (default 5% of original size)
Maximum size (default 120% of original size)
– OFF:
vol autosize vol_name [-m size[k|m|g|t]] [-i size[k|m|g|t]] [on|off|reset]
NetApp Confidential 55
Configurations can get complex. If you have doubts about the recommended best practices for reservations, consult this guide: "Technical Report: Thin Provisioning in a NetApp SAN or IP SAN Enterprise Environment" at http://media.netapp.com/documents/tr-3483.pdf.
NetApp systems work well with Microsoft Exchange, so well that some NetApp software engineers consider Exchange environments to be the easiest sell for NetApp products. When you demonstrate SnapManager software for Exchange to an administrator, that administrator becomes eager to see more and to put the software into an environment. This is a good NetApp solution.
Exchange 2010 is now out and starting to be implemented widely in the customer world. NetApp now supports Exchange 2010. SnapManager 6.0 for Exchange is available.
Why use NetApp hardware and software solutions for Exchange?
Snapshot technology
Flexible provisioning
Aggregates
Spreading of data across many spindles for optimized performance
Good I/O-per-second performance
Excellent FC options
Clustering
Network multipath I/O (MPIO)
Windows integration
Why Use NetApp Systems
for Applications?
A few of the reasons to use NetApp systems are:
Snapshot copies
Data and Snapshot management and replication
Flexibility and ease of use
Dynamic provisioning
Performance
iSCSI solutions that are provided by a market leader
Cost-effective FC solutions that are gaining market
recognition
Excellent high-end FC, clustering, and network
multipath I/O (MPIO) options
NetApp Confidential 5
In this example, when you use flexible volumes on an aggregate, you share all of those I/Os per second from all of those disks. If one server suddenly generates much hot traffic, it does not matter, because the volume or LUN spreads the load out and equalizes it across all of the disks in the aggregate.
Much best-practice information is available for setting up Exchange environments. Many Exchange environments keep their data and logs on the same aggregate. Some environments keep data on one aggregate and logs on another. It depends on the environment and its traffic profile. NetApp has technical reports that discuss best-practice configurations.
Because so many disk I/Os per second are required, large aggregates with flexible volumes striped across them are always a big win for Exchange environments.
Flexible Volumes (Exchange Example)
An aggregate with flexible volumes:
Total disks are available to all
flexible volumes.
Volumes are logical and flexible,
not constrained by hardware.
Volumes can be sized as
needed.
Volumes are easy to
manage with maximum I/O
performance.
A Data ONTAP 7G aggregate pool of physical disks, flexible volumes, and increased aggregate disk I/O bandwidth
[Diagram: an aggregate hosting multiple flexible volumes, each holding Logs LUNs and Data LUNs; the LUNs hold host data, and the volumes provide data management]
NetApp Confidential 6
The native tools back up only databases and search indexes. The administrator must manually back up front-end files. Microsoft recommends that users keep images of the Web servers. The native tools require a long restore time and provide low availability during the restore process. Also, no out-of-the-box scheduling mechanism exists; you must use the command line with Windows Task Scheduler to schedule backups.
The bottom line is that customers need a third-party data-protection solution.
Why Not Use Native Management Tools?
To back up:
– No scheduling is available; you must manually start the
backup.
– The process is resource-intensive; Microsoft does not recommend that you run it during production.
– Granularity is poor; it is limited to the site level.
To restore one file:
– You must first restore the entire database onto a
nonproduction server.
– You must then manually copy a single file onto a
production server.
– You cannot prevent the loss of important metadata, histories, and security settings that are associated with the file.
NetApp Confidential 8
NetApp offers specialized software for Exchange environments:
SnapManager software
SnapDrive software, which runs in both Ethernet and FC environments
Single Mailbox Recovery (SMBR) software
Operations Manager, which provides a central management console for NetApp systems
SnapManager software is the primary piece of software that everyone thinks about in an Exchange environment. It facilitates rapid online backups and restores. It integrates directly with the Exchange API and performs Esefile verification. This course discusses that later, but it is an important piece, as is automated log replay. SnapManager software also provides a nice UI and wizards for configuration, backup, and restore.
SnapManager software depends on SnapDrive software. Because this is a SAN environment, SnapDrive software is required on the back end to manage OS-consistent Snapshot copies and the file systems themselves.
SMBR is a good tool for pulling a single message, an entire mailbox, a folder, or whatever you need out of a backup, and then restoring it to a live Exchange server or to a separate .pst file.
NetApp Software for Exchange
SnapManager software:
– Provides rapid online backups and restores by integrating with the Exchange
backup API, running Esefile verification, and automating log replay
– Includes an intuitive UI and wizards for configuration, backup, and restoration
SnapDrive software:
– Provides dynamic disk and volume expansion
– Supports Ethernet and FC environments
– Supports Microsoft Cluster Services (MSCS) and NetApp controller failover
(CFO) for high availability
– Is required for Windows SnapManager products and included with UNIX
SnapManager products
Single Mailbox Recovery (SMBR) software restores a single message,
mailbox, or folder from a Snapshot backup to a live Exchange server or
.pst file (an optional feature).
NetApp Confidential 12
With NetApp SMBR software, you can provide better service, reduce infrastructure expenses, and improve productivity for Exchange administrators. NetApp SnapManager for Exchange, when combined with NetApp SMBR software, enables you to create near-instantaneous online backups of Exchange databases and to verify that the backups are consistent, so that you can rapidly recover Exchange data at any level of granularity: storage group, database, folder, single mailbox, or single message.
Single mailbox restore comes from PowerControls software. Many other products provide this capability, but when it is combined with NetApp Snapshot technology, it becomes more powerful.
Single mailbox restore makes the process of restoring items from a mailbox a simple help-desk function rather than an IT operation such as pulling and restoring tapes. This tool is effective and efficient, especially in versions of Exchange earlier than Exchange 2007.
Single Mailbox Restore (Exchange)
Use PowerControls software.
Quickly access Exchange data that is stored in online
Snapshot backups.
Select any data, down to a single message.
Restore the data to one of two locations:
– An offline mail file:
The file is in personal storage file (.pst) format.
Open the file in Microsoft Outlook.
– The user’s mailbox:
Connect to a live Exchange server.
Copy data directly to the user’s mailbox.
Data is instantly available.
NetApp Confidential 13
Because of the I/O load on an Exchange system, NetApp products may not help to increase the number of users that can be sustained by one system. Another aspect that a software engineer should be aware of is that if a customer uses iSCSI, the customer needs more CPU overhead for the software initiators. This overhead can range from 10% to 15%, depending on the system load. The customer may want to go with a hardware initiator for easier scaling.
Exchange Server Performance
The server needs megacycles for networking,
user activity, and database verifications,
among others.
The iSCSI software initiator requires more
CPU.
FC and iSCSI hardware initiators scale better,
saving 10% to 15% of CPU.
NetApp Confidential 14
Storage resiliency: provided by separate storage systems for the active and passive nodes. In addition, these storage systems can be clustered.
Space efficiency: provided by the NetApp deduplication feature, which is run against the NetApp Exchange volumes on the storage.
Advantages of having SnapManager for Exchange (SME):
All the advantages of the previous scenario remain.
You also now drive down space consumption at the passive copy. This further reduces the need for additional storage space.
A Database Availability Group (DAG) is a set of up to 16 Microsoft Exchange Server 2010 Mailbox servers that provide automatic database-level recovery from a database, server, or network failure. Mailbox servers in a DAG monitor each other for failures. When a Mailbox server is added to a DAG, it works with the other servers in the DAG to provide automatic, database-level recovery from database, server, and network failures.
Data Resiliency and Efficiency
Site A
SnapManager for Exchange and SMBR
Database Availability Group (DAG)
[Diagram: a Client Access Server in front of a DAG at Site A; an active Database A with Snapshot backups taken at 9:00, 9:15, and 9:30 AM (Backup-1 through Backup-3) and a replica Database B, with NetApp deduplication applied on the storage]
NetApp Confidential 15
In addition to the partnership with Microsoft for Exchange, NetApp has partnerships with Oracle, IBM (for DB2), Microsoft (for SQL Server), Sybase, and SAP. Because SAP always runs on top of another database, SAP is included here.
NetApp SnapManager for SAP is currently only for SAP running on Oracle, which currently runs only on Solaris. SnapManager software for each of these products provides a similar suite of functionality as previously described: provisioning the storage, working with flexible volumes, and using Snapshot copies, SnapMirror relationships, and the SnapRestore feature.
NetApp Database and
Application Solutions
NetApp has partnerships, solution sets, and
resources for the following:
Oracle (database and applications)
IBM DB2
SQL Server
Sybase
SAP
NetApp Confidential 19
DATABASE AND DATABASE OBJECT CREATION AND MODIFICATION
The next pain point is database and data object creation and modification: in other words, the creation of duplicates of databases. This process is time-consuming and difficult and uses system resources.
NetApp FlexClone software is an inexpensive way of making copies of a database for testing, quality assurance (QA), and development. This feature is important in database environments and is probably where FlexClone software is the most obvious fit, although it has many uses outside of the application world.
NetApp Solution
An easy, space-efficient, and
relatively inexpensive way to
make copies of a database for
testing, quality assurance (QA),
and development
FlexClone software, the key
feature that facilitates the solution
Pain Point
Creating duplicates of databases
is a time-consuming and difficult
process and uses valuable storage
resources.
Database and Database
Object Creation and Modification
NetApp Confidential 20
Here are all of the blocks on disk. At this point, you make a Snapshot copy of these blocks. No space is used at this time. The Snapshot copy is a read-only copy.
The next step is to create a cloned volume based on the Snapshot copy. The clone ties to the same blocks of active data as the Snapshot copy and uses them as its base.
As changes are made to the data, the changes are tracked separately. The changes to the clone do not affect the original volume, and changes to the original volume do not affect the clone. The advantage is that space requirements do not double with the clone. Because the clone shares blocks with the original volume, only changed data takes up additional space.
This makes the clone space-efficient and near-instantaneous to create, because no data movement occurs, only replication of pointers in the metadata to the original data blocks.
Much less space is used, and much less time is spent creating the clone. Given a 2-TB database, making a physical copy takes hours. With FlexClone software, the moment that the command is typed, the cloned volume is available and ready to use. DBAs love cloning. Typically, you take the clones from the mirror to offload the additional I/O from production spindles, but this is not a requirement.
In the case of a database failure, using FlexClone software, an administrator can perform the restore and get production up and running, and can take a clone of the pre-restore state to run tests and scenarios to determine what happened.
If the administrator makes changes and realizes that the copy must be independent, the administrator can use a clone-splitting command. At that point, in the background, the controller copies all of the shared blocks so that they become separate blocks on disk that exist completely independently of each other as separate volumes.
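The pointer-sharing described above can be sketched as a tiny copy-on-write model (an illustration only; none of these names come from FlexClone itself):

```python
# Toy copy-on-write model of a cloned volume: the clone starts by
# sharing every block with the Snapshot copy it is based on, and only
# blocks changed on the clone consume new space.

base = {f"blk{i}": f"data{i}" for i in range(6)}   # parent volume's blocks
snapshot = dict(base)       # read-only, point-in-time view the clone is based on
clone_overrides = {}        # only blocks the clone has changed use new space

def clone_read(block):
    # Unchanged blocks fall through to the shared Snapshot data.
    return clone_overrides.get(block, snapshot[block])

def clone_write(block, value):
    # Copy-on-write: the change lands in a new block owned by the clone.
    clone_overrides[block] = value

clone_write("blk2", "patched")
print(clone_read("blk2"))          # the clone sees its own changed block
print(clone_read("blk0"))          # everything else is still shared
print(len(clone_overrides))        # extra space used: 1 block, not 6

# A clone split copies the remaining shared blocks so that the two
# volumes become fully independent:
split_volume = {**snapshot, **clone_overrides}
```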
Volume Cloning: How It Works
Cloned
Volume
1. Start with a volume.
2. Create a Snapshot copy.
3. Create a clone (a new
volume based on the
Snapshot copy).
4. Modify the original volume.
5. Modify the cloned volume.
Result:
Independent volume copies
that are efficiently stored
Volume 1
Snapshot
Copy of
Volume 1
Data Written
to Disk
Snapshot Copy
Cloned Volume
Changed Blocks
Volume 1
Changed Blocks
NetApp Confidential 21
If you clone 10 production systems, each with a 500-GB Oracle database, expect to need at least one clone per
week (for example, for patching, schema and database extension testing, and database upgrades).
Traditional time per database clone is approximately 14 hours on a 100-Mbps network or 1.25 hours on a 1-Gbps network, respectively.
NetApp time per database clone is less than one minute.
Now examine the 1-Gbps network times from the example:
Total time = 10 systems x 1.4 hours per clone x 52 weeks per year = 728 hours per year
Total time for NetApp = 10 systems x 1/60 of an hour per clone x 52 weeks per year = approximately 9 hours per year
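The yearly totals above can be checked directly. The per-clone times are the course's example figures, not measured values:

```python
# Reproduce the yearly cloning totals from the example above.
systems = 10                         # production systems, one clone each per week
weeks = 52

traditional_hours_per_clone = 1.4    # ~1.4 h per 500-GB copy over a 1-Gbps network
netapp_hours_per_clone = 1 / 60      # under one minute with FlexClone

traditional_total = systems * traditional_hours_per_clone * weeks
netapp_total = systems * netapp_hours_per_clone * weeks

print(round(traditional_total))      # 728 hours per year
print(round(netapp_total))           # approximately 9 hours per year
```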
NetApp Approach
1. Select the source clone.
2. Select the target system.
3. Click the mouse a few times to submit selections.

Traditional Approach
1. Prepare target system volumes for the database files and database file system.
2. Shut down the source database or put the database into online backup mode.
3. Copy the data file to the target system volumes.
4. Repeat steps 2 and 3 for each data file.
5. Restart the source database.
6. Copy the database file system to the target system.
7. Configure the target system database server.
8. Restart the new target database server.
9. Roll forward redo logs if required.

Cloning for Testing and Development
Total time every year for cloning 10 production systems: 728 hours (traditional) versus 9 hours (NetApp).
(Roles involved: Storage Administrator, DBA, Server Administrator)
NetApp Confidential 22
Server virtualization allows dramatic levels of server consolidation, often in the range of 10:1. This overcomes
the old “silo” design of one application to one server.
However, a storage failure in a virtualized server can take down 10 applications, not just one. This leads to a
need for more reliable storage.
A dual-disk failure (or more commonly, a failure with a media error on rebuild) means that data sets of 10
applications must be reloaded, not just one. This means that a company needs something better than RAID 5.
With 10 times more data on a server, a company may not be able to make its backup windows, so it needs faster backup.
In addition, with IT operations that are more and more critical, disaster recovery continues to increase in priority. Disaster recovery is difficult in a direct-attached storage (DAS) environment but becomes practical
with virtualized servers and storage-based disaster recovery.
While server virtualization greatly enhances server provisioning, the result is fast server provisioning and slow storage provisioning, unless other means of storage provisioning are integrated.
Virtualization Increases Storage Demands

                                                     Before Virtualizing*   After Virtualizing
The number of applications per server                1                      More than 10
The number of physical servers                       More than 10           1
The number of down applications on storage failure   1                      More than 10
The amount of lost data on dual-disk failure         1x                     10x
The backup data volume                               1x                     10x
The possibility of meeting the backup window         Feasible               Maybe not
Disaster recovery                                    Costly and complex     More complex
Provisioning                                         Slow and complex       Storage ≠ servers

* Typical configuration: DAS, RAID 5, and tape backup
NetApp Confidential 26
FLASH CACHE USE CASE: AN OPPORTUNITY FOR DEDUPLICATION
VMware provides a great opportunity for deduplication with NetApp. VMware stores redundant data in each virtual machine (VM), such as the OS, patches, and software applications that are common to every virtual server.
NetApp can reduce redundant data to a single instance with deduplication. This can save as much as 90% of space, which significantly reduces storage costs.
This capability is unique to NetApp and is a strong selling point against the competition. Only NetApp can perform deduplication on primary data.
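The core idea can be sketched in a few lines. This is an illustrative model only, not Data ONTAP internals: block-level deduplication keeps one copy of each unique block by content fingerprint, so identical VM images collapse to a single stored instance.

```python
# Illustrative sketch only -- not Data ONTAP internals. Block-level dedup
# stores one copy of each unique block, identified by a content hash;
# logical blocks become pointers (fingerprints) into the shared store.
import hashlib

def dedup(blocks):
    store = {}   # fingerprint -> single stored copy of the block
    refs = []    # per-logical-block pointer into the store
    for b in blocks:
        fp = hashlib.sha256(b).hexdigest()
        store.setdefault(fp, b)   # store the block only the first time
        refs.append(fp)
    return store, refs

# Four cloned VMs whose disk images are 100% identical:
vm_image = [b"os-block", b"patch-block", b"app-block"]
store, refs = dedup(vm_image * 4)

print(len(refs))   # 12 logical blocks across the four VMs ...
print(len(store))  # ... but only 3 unique blocks actually stored
```

In this toy example, 12 logical blocks reduce to 3 stored blocks; the same mechanism is what yields the large savings on redundant VM data.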
Flash Cache Use Case: An Opportunity for Deduplication
Clones consume storage that is equal to the size of the template.
Clones are 100% identical: OS software, patches, software drivers, and application data.
To deduplicate virtual machine (VM) blocks, use Flash Cache to help accelerate concurrent data access.
[Diagram: an ESX server with Data Store A. On traditional enterprise RAID arrays, each .vmdk keeps its own copy of the OS and application blocks above the RAID layer. On a NetApp FAS system with FlexVol technology, duplicate data is eliminated, and Flash Cache accelerates access to the single shared copy. *.vmdk = Virtual Machine Disk]
NetApp Confidential 27
Consider how NetApp extends continuous server availability to storage.
Microsoft Live Migration enables VMs to be moved from one physical server to another without disrupting applications, for purposes of workload balancing, resource optimization, maintenance, upgrades, and so on.
NetApp Data Motion complements Microsoft Live Migration.
The benefits of Data Motion are:
No planned downtime for:
– Storage-capacity expansion
– Scheduled maintenance outages
– Technology refresh
– Software upgrades
Improved SLA flexibility
– Dynamic load balancing
– Adjustable storage tiers
Application transparency
– Performance
– Transaction integrity
Always-On Server and Data Mobility
Microsoft® Live Migration
– Non-disruptive migration of
VMs across physical
machines
– Storage vendor independent
NetApp Data Motion™
– Migration of data stores
across NetApp storage
systems
Storage array balancing
Technology refresh
Capacity management
Moves hundreds to
thousands of data stores in a
single operation
[Diagram: data stores migrating between two storage pools beneath two Hyper-V (H-V) hosts]
NetApp Confidential 29
Virtual desktops are rapidly being adopted by organizations because of the potential improvements to desktop
computing.
Key Points:
Virtual desktops can simplify desktop management. For example, VDI reduces the number of desktop images that must be managed and maintained by eliminating the need for a different image for each desktop model. VDI even allows employees to use their own PCs, which enables IT to offload PC hardware support.
VDI promises lower costs, especially reduced administrative requirements and PC refresh costs. Note that while costs can be reduced, it is generally the TCO and not the up-front costs, because of the investment in
data-center infrastructure.
An early driver for VDI adoption was the ability to reduce data loss by moving data from the desktop to the
data center, where backup and disaster recovery can be applied more consistently.
The transfer of data from the desktop also improves data security. Access to data is controlled centrally, and this avoids security exposure if a PC is stolen.
Finally, a major reason why companies adopt virtual desktops is to streamline the migration to Windows 7 by rolling Windows 7 out centrally and by virtualizing applications that don’t run within Windows 7.
The Promise of Virtual Desktops
Simplify desktop management:
– Reduce the need for intensive technical support.
– Reduce the number of PC images.
Lower costs by addressing staffing costs and
data-recovery costs.
Reduce data loss by making backup less
challenging and therefore more likely.
Improve security and compliance:
– Control data portability.
– Centralize continuous security upgrades and
patches.
NetApp Confidential 32
The Promise of Virtual Desktops
Simplify desktop management
Lower costs by addressing staffing costs and data-recovery costs
Reduce data loss
Improve security and compliance
Streamline OS upgrades such as to Windows 7
NetApp Confidential 34
The FlexPod solution is the best-of-breed infrastructure foundation that supports virtualized and nonvirtualized workloads using Cisco UCS and Nexus (servers and network) and NetApp FAS (storage systems). This is best-of-breed unified compute, unified network, and unified storage.
NetApp will soon introduce the FlexPod for VMware solution, the first FlexPod solution to be launched.
The FlexPod solution is built around three key capabilities:
Lower risk with a validated, simplified data-center solution and a cooperative support model for a safe and proven journey to virtualization and toward the cloud
Enabled business agility with flexible IT that scales out and up to fit multiple use cases and environments such as SAP, Exchange 2010, SQL, VDI, and secure multi-tenancy (SMT)
Reduced TCO with higher data-center efficiency, decreased number of operational processes, reduced energy consumption, and maximized resources
Introducing the FlexPod Solution
Benefits
Low-risk standardized shared infrastructure
supporting a wide range of environments
Highest possible data center efficiency
IT flexibility, providing business agility: scale out or up, but manage resource pools
Features
Complete data center in a single rack
Performance-matched stack
Step-by-step deployment guides
Solutions guide for multiple environments
Multiple classes of computing and storage
supported in a single FlexPod
Centralized management: NetApp
OnCommand and Cisco® UCS Manager
Cisco UCS B-Series
Cisco UCS Manager
Cisco Nexus Family
Switches
NetApp FAS
10 GE and FCoE
Complete Bundle
Shared Infrastructure for a Wide Range
of Environments and Applications
NetApp Confidential 43
SECURE MULTI-TENANCY THAT IS BUILT ON FLEXPOD FOR VMWARE
The Enhanced SMT deployment guide will be built on FlexPod for VMware infrastructure.
Layering on top of the FlexPod solution allows full-blown cloud solutions such as secure multi-tenancy to be built. (See the recently released Enhanced Secure Multi-Tenancy CVD.)
Secure Multi-Tenancy That Is Built on FlexPod for VMware
Layer on or enable software:
NetApp MultiStore and the FlexShare tool
VMware vShield zones and applications
VMware vSphere Enterprise Plus
Security hardening
The Cisco Nexus 1000V series
Cisco SAFE architecture
Enable capabilities:
Multi-tenancy and secure separation
Service availability and disaster recovery
Service management
Service assurance
Workload isolation and mobility
The Enhanced Secure
Multi-Tenancy (SMT)
Cisco-Validated Design,
Released October 2010
NetApp Confidential 45
To help customers achieve the storage efficiency that they require, the newest release of OnCommand management software groups multiple products into one family and unifies multiple capabilities into one product.
OnCommand products are designed to make NetApp storage the best choice for physical, virtual, and cloud environments.
Control NetApp storage with System Manager and My AutoSupport. System Manager provides simple, workflow-based wizards that automate device-management tasks. Administrators can quickly set up and efficiently manage NetApp SAN and NAS systems.
Automate NetApp storage infrastructures via OnCommand Unified Manager and SnapManager software. OnCommand Unified Manager integrates the functions of Provisioning Manager, Protection Manager, and Operations Manager into one user interface. Through one view, customers can monitor their shared storage environment and drill down to define storage-service levels and policy-based workflows. SnapManager software provides the ability to connect to and manage from various platforms, including from virtualized platforms.
Analyze shared IT infrastructures via the OnCommand Insight products. OnCommand Insight products provide visibility and optimization across heterogeneous storage infrastructures. The products that were formerly known as SANscreen and Akorri BalancePoint have been integrated into OnCommand Insight. With OnCommand Insight, customers can optimize performance, plan capacity requirements, and ensure that they are meeting their service-level needs.
OnCommand Products: Service Automation and Analytics
Device management
Problem detection
Monitoring and reporting
Service automation
Policy-based workflows
Service catalog for SLAs
Capacity planning
Service management
Performance analytics
Multivendor, multiprotocol
System Manager
Simple storage device management
OnCommand
Service Automation
OnCommand Unified Manager
Virtual Storage Console
OnCommand Insight
Service Analytics
OnCommand™ Insight Balance
OnCommand™ Insight Assure
OnCommand™ Insight Perform
OnCommand™ Insight Plan
NetApp Confidential 12
NetApp OnCommand products enable IT storage teams to unify the operation, provisioning, and protection of their organization’s data and deliver efficiency savings.
Key benefits that enable the savings:
Simple. A unified approach and one set of tools enable management of physical worlds, virtual worlds, and service-delivery systems. Therefore, NetApp storage is the most effective storage for the virtualized data center.
Efficient. Automation and analytics capabilities deliver on storage and service efficiency, reducing IT capex and opex spend by up to 50%.
Flexible. Tools provide visibility and insight into complex, multiprotocol, multivendor environments and provide open APIs that enable integration with third-party orchestration frameworks and hypervisors. Therefore, OnCommand products provide a flexible solution that enables rapid response to changing demands.
OnCommand Management Products: Simplicity, Efficiency, and Flexibility
Simple
Single unified approach
Physical and virtual service
Efficient
Automation and analytics
Storage efficiency
Service efficiency
Flexible
Visibility and insight
Open API that integrates with third-party
management products and hypervisors
Provide effective storage for the virtualized data center
Reduce IT
spend up to 50%
Rapidly respond
to changing
demands
OnCommand management software delivers efficiency savings by unifying storage
operations, provisioning, and protection for both physical and virtual resources
NetApp Confidential 13
OnCommand management software is the fifth generation of NetApp storage-resource management products.
To improve administrative efficiency, OnCommand products integrate numerous, previously separate capabilities. These capabilities were previously identified as Provisioning Manager, Protection Manager, Operations Manager, SnapManager for Virtual Infrastructure (VMware), and SnapManager for Hyper-V (Microsoft).
OnCommand software provides a unified platform. The unified platform enables creation and extension of policies that can be specific to servers, VMs, and applications. It centralizes provisioning, cloning, backup-and-recovery, and disaster-recovery policies and provides security features such as role-based access control (RBAC) and delegated manageability.
OnCommand software enables management across workloads for Snapshot naming, backup-type and retention-period specification, prescripting and postscripting, and policy extension. It integrates the back end into one configuration repository for reporting, event, and audit logs and provides one dashboard from which storage resources can be viewed and interface options can be selected.
OnCommand software is included with the purchase of NetApp storage hardware.
OnCommand: Integrated Storage Management and Automation
Uniform management across
workloads
Snapshot naming, backup-type and retention-period specification, prescripting and postscripting, and policy extensions
One configuration repository
for reporting, event, and audit
logs
Unified view, interface choice
Integrated offering
– Capabilities provided by multiple earlier products
Unified, extensible policy
infrastructure
– Server, VM, and application
aware
– Provisioning, cloning, backup/recovery, and DR
policies
– Infrastructure-wide RBAC with
delegated management
– Extensible to other applications
NetApp Confidential 14
OnCommand 5.0 has been packaged into central and host services, based on physical or virtual management capabilities.
The central services comprise the core manageability software, pertaining to the tools related to physical storage.
The host package encompasses the host plug-ins, based on the type of virtual infrastructure supported. For example, the host package installs the services to monitor and manage virtual infrastructure (VIM). When you install host services in a VMware environment, the OnCommand 5.0 host plug-in for vCenter Server is also automatically installed.
OnCommand Components
Two packages:
Core services: physical storage manageability
Host services: virtualization plug-ins
Core
Host
NetApp Confidential 16
The architecture diagram identifies the basic components of the OnCommand core and host packages.
The color-coding distinguishes the core components (orange) from the host components (green).
Solid boxes identify front-end GUIs that users interact with directly, and the dashed boxes identify back-end
servers or services that are not directly visible to the user.
The OnCommand console serves as the GUI from which Hyper-V objects are managed and, alternatively, as the GUI from which VMware objects are managed. The OnCommand console launches the Operations Manager console and the NetApp Management Console, from which the physical environment is managed.
DataFabric Manager server can be installed in the standard edition or the express edition.
OnCommand host services cache schedules, catalogs, and events for short periods and enable execution without DataFabric Manager server.
The plug-ins for Hyper-V and VMware are collections of primitives that enable connection into Hyper-V and VMware environments.
SnapDrive for Windows software is used only within the Hyper-V environment. It is used for storage discovery and to manage LUNs and Snapshot copies.
The vSphere Client GUI is native VMware software that is used by the VMware administrator for virtual environment administration. OnCommand software provides the GUI with access to the storage environment.
NetApp Confidential 17
OnCommand Architecture
[Architecture diagram. Front-end GUIs (solid boxes): OnCommand Console, NetApp Management Console, Operations Manager Console, and vSphere Client GUI. Back-end servers and services (dashed boxes): DataFabric Manager Server, OnCommand Host Services with APIs, Hyper-V plug-ins, VMware plug-ins, and SnapDrive for Windows. Both stacks connect to the storage systems.]
OnCommand software provides a unified dashboard that identifies all storage resources (for at-a-glance status
and metrics) and provides various interface choices.
OnCommand software continuously monitors and analyzes the health of the environment and provides
visibility across the environment. It identifies what is deployed and displays utilization information, enabling customers to improve their storage-capacity utilization and increase the productivity and efficiency of their IT administrators.
The dashboard’s panels contain information about the system and provide cumulative information about various aspects of the environment:
Availability: information about the storage controllers and vFiler units that are discovered and monitored by OnCommand (for example, the number of controllers and units that are down).
Events: status of the storage and server objects. The top five events (ranked by degree of severity) are listed.
Full Soon Storage: identification of aggregates and volumes that are near capacity (based on the number of days before capacity will be reached).
Fastest Growing Storage: identification of aggregates and volumes for which space usage is increasing rapidly and information about growth rate and trend for specific aggregates and volumes.
Dataset Overall Status: status of the environment.
Resource Pools: identification of the resource pools that, given current usage levels, may experience space shortages.
External Relationship Lags: information about the relative percentages of external SnapVault, qtree SnapMirror, and volume SnapMirror relationships (with lag times in error, warning, and normal status).
Unprotected Data: number of unprotected storage and server objects that are being monitored.
In addition, views are available through virtualization platforms that are based on the SnapManager self-service customer portals. The portals are available through the Service Catalog capability or the integrated partner frameworks.
OnCommand User Interface Choices
NetApp Confidential 18
vCenter, Microsoft System Center
Customer Portal Partner Portal
OnCommand Dashboard
Using the OnCommand dashboard to review information provides visibility across your storage environment by continuously monitoring and analyzing its health. You get a view of what is deployed and how it is being utilized, enabling you to improve your storage-capacity utilization and increase the productivity and efficiency of your IT administrators. This unified dashboard gives at-a-glance status and metrics, which is far more efficient than using multiple resource-management tools. This web-based interface uses a common web framework called NWF.
The dashboard is a user-interface window containing information panels that provide information about the system. NetApp OnCommand has various dashboard panels that provide cumulative information about aspects of your environment.
The Availability dashboard panel provides information about the storage controllers and vFiler units that are discovered and monitored by OnCommand. You can also view the number of controllers and vFiler units that are in the down state.
The Events dashboard panel provides information about the status of the storage and server objects by listing the top five events based on their severity.
The Full Soon Storage dashboard panel displays aggregates and volumes that are reaching their capacity. The information displayed in this panel is based on the number of days before this threshold will be breached (that is, at the current growth rate, how many days it will take to fill).
The Fastest Growing Storage dashboard panel displays aggregates and volumes for which space usage is increasing rapidly. It also displays the growth rate and trend for a specific aggregate or volume.
The Dataset Overall Status dashboard panel displays the number of datasets in overall error status, overall warning status, or overall normal status.
The Resource Pools dashboard panel displays the resource pools that may face potential space shortages based on current usage levels.
OnCommand Dashboard in Detail
NetApp Confidential 19
The External Relationship Lags dashboard panel displays the relative percentages of external SnapVault, qtree SnapMirror, and volume SnapMirror relationships with lag times in error, warning, and normal status.
The Unprotected Data dashboard panel displays the number of unprotected storage and server objects that are being monitored.
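The days-to-full projection behind the Full Soon Storage panel can be sketched as simple arithmetic. This is an illustrative sketch, not OnCommand code; the function names, volume figures, and 30-day threshold are assumptions for the example.

```python
# Illustrative sketch (not OnCommand code) of a "full soon" projection:
# days until full = remaining capacity / daily growth rate.

def days_until_full(capacity_gb, used_gb, daily_growth_gb):
    if daily_growth_gb <= 0:
        return None  # not growing; never flagged
    return (capacity_gb - used_gb) / daily_growth_gb

def full_soon(volumes, threshold_days=30):
    # Flag objects projected to fill within the threshold, soonest first.
    flagged = []
    for name, cap, used, growth in volumes:
        d = days_until_full(cap, used, growth)
        if d is not None and d <= threshold_days:
            flagged.append((name, round(d, 1)))
    return sorted(flagged, key=lambda x: x[1])

# (name, capacity GB, used GB, daily growth GB) -- made-up example data
vols = [("vol1", 1000, 950, 5), ("vol2", 2000, 500, 10), ("aggr1", 500, 490, 2)]
print(full_soon(vols))  # [('aggr1', 5.0), ('vol1', 10.0)]
```

Here vol2 has 150 days of headroom and is not flagged, while aggr1 and vol1 are projected to fill within 5 and 10 days respectively.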
OnCommand simplifies and standardizes storage operations. Standardized configuration accelerates deployment and mitigates operational risks. OnCommand software delivers storage management features that enable business policy compliance, achieved by using enterprise-wide configuration management, distributed policy setting, and customized reporting.
OnCommand is intuitive and helps improve the productivity of storage administrators. The operations capability of the product helps storage administrators resolve problems faster and improve capacity utilization by providing a full picture of NetApp storage resources. With just a few clicks, administrators can drill down to detailed storage-system information. And by replacing repetitive, time-intensive tasks with policy-based automation, they become more productive.
Role-based access control on the centralized console makes it possible for server and database administrators to perform self-service provisioning. Because these tasks are performed only within the limits of policies defined by IT architects and based on company business requirements, the system remains stable, efficiently configured, and under control. Policies that can be ascribed to datasets include capacity, storage reliability, space-provisioning requirements, access mechanisms, and security settings.
Another valuable dimension of operations management is monitoring and analysis for reporting. With OnCommand, you can continuously monitor and analyze the health of your storage environment and can thus maintain visibility of what is deployed and how it is being utilized. This improves both storage-capacity utilization and administrator efficiency.
By streamlining provisioning, OnCommand software enables customers to increase operating efficiency, eliminate hands-on complexity, and simplify down-stream administration; the complexity of the underlying storage can be hidden. OnCommand allows you to provision and protect data at the same time: the moment you provision storage, you protect it. No additional steps or time are required.
OnCommand: Operations and
Provisioning
RBAC
Policies
Monitoring
Reporting
Provision and protect
at same time
Assign preconfigured
services to datasets
View reports to identify potential
storage savings from deduplication
NetApp Confidential 21
OnCommand increases operating efficiency and eliminates hands-on complexity by streamlining provisioning. It allows data to be provisioned and protected at the same time; no additional steps or time are required.
Provisioning with OnCommand allows the automation of complex provisioning processes. Services can be defined granularly by the storage architect and then be easily and consistently selected by down-stream administrators.
To maximize use of your resources, OnCommand automates NetApp storage-efficiency features, including thin provisioning and primary data deduplication. This automation eliminates unnecessary and wasteful over-provisioning and provides storage only when needed. In addition, during the provisioning process, OnCommand can automatically select the best resource to meet a request. As resource pools approach full allocation, the system can automatically issue alerts and suggest ways to increase available space.
OnCommand software simplifies the process of protecting enterprise data by enabling administrators to group
data into datasets and apply preset policies to the datasets. It automatically correlates datasets and underlying physical storage resources, so administrators do not need to think in terms of the storage infrastructure.
OnCommand software helps protect data by providing administrators with an easy-to-use managementconsole that they can use to quickly configure and control all SnapMirror, SnapVault, Open SystemsSnapVault (OSSV), and SnapManager operations. Administrators can apply data-protection policiesconsistently, automate complex protection processes, and pool backup and replication resources.
A simple dashboard provides an at-a-glance view of comprehensive data-protection information, including
information about unprotected data. The software enables administrators to apply predefined policies to thedata, thus minimizing the potential for error. OnCommand software also provides e-mail alerting to enable
issues to be analyzed and corrected before they significantly impact data protection.
OnCommand: Protection
Grouping of similar
requirements
Preset policies
A simplified process
Alerts
Protection status
at a glance
NetApp Confidential 22
The Storage Service Catalog, a component of OnCommand software, is a key service-automation
differentiator for NetApp. It enables storage-provisioning policies, data-protection policies, and storageresource pools to be integrated into a single service offering that administrators can choose when provisioningstorage. The catalog not only automates much of the provisioning process but also automates a variety ofstorage-management tasks that are associated with the policies.
The catalog provides a layer of abstraction between the storage consumer and the details of the storage configuration, creating "storage as a service." The service levels that are defined with the catalog specify and map policies to the attributes of the pooled storage infrastructure. The higher level of abstraction between service levels and physical storage enables elimination of complex, manual work and encapsulates storage and operational processes together for optimal, flexible, and dynamic allocation of storage.
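The mapping described above can be sketched as data. This is an illustrative sketch, not the Storage Service Catalog's actual schema: the "Gold" level appears in the course's later example, but the specific policy attributes, the "Silver" level, and the tier names here are assumptions.

```python
# Illustrative sketch (not the actual Storage Service Catalog schema):
# a service level bundles provisioning and protection policies with a
# resource pool, so the consumer picks a level, not individual settings.
CATALOG = {
    "Gold": {
        "thin_provisioned": True,
        "dedup": True,
        "backup": "hourly Snapshot + SnapVault",
        "replication": "SnapMirror to DR site",
        "resource_pool": "tier1",
    },
    "Silver": {
        "thin_provisioned": True,
        "dedup": True,
        "backup": "nightly Snapshot",
        "replication": None,
        "resource_pool": "tier2",
    },
}

def provision(name, size_gb, service_level):
    # The consumer states "what" (name, size, level); the catalog expands
    # it into "how" (pool, efficiency, protection) behind the abstraction.
    policy = CATALOG[service_level]
    return {"name": name, "size_gb": size_gb, "level": service_level, **policy}

req = provision("oracle01", 800, "Gold")
print(req["resource_pool"], req["replication"])
```

The point of the abstraction is visible in `provision`: no storage detail crosses the interface, yet every returned request carries a complete, consistent set of policies.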
What is the Storage Service Catalog?
Included free with
OnCommand®
Enables storage as a service
Automates manual
processes
Unique to NetApp
NetApp Confidential 24
[Diagram: a subscriber and a storage architect use self-service portals against a data-center orchestration layer that spans application, server, network, and storage. The Service Catalog maps the logical view of a service (application, server, network) to the technology view (resource pool, policies, metrics).]
NetApp is developing an ecosystem that delivers the value that partner products can provide, while assuring
flexibility and choice for customers. The result is a solution that addresses the unique needs of the end-customer environment. Key technologies that enable this differentiation are an open API and a free SoftwareDevelopment Kit (SDK).
The companies that provide IT integration within the NetApp ecosystem represent some of the best-knownnames in the industry (such as virtualization-management solutions from Microsoft, VMware, and Citrix andenterprise-management frameworks from BMC Software, CA, HP, IBM, and Fujitsu).
The technologies that differentiate NetApp are an open API and a free Software Development Kit (SDK). The
OnCommand SDK and the open APIs provide partner platforms with a tighter integration at a higher storage-abstraction layer, thus enabling policy-based automation for protection and provisioning tasks on NetApp
storage.
The goal of NetApp's partnerships and of NetApp's integration with management and orchestration vendors is to enable customers to manage their infrastructure from end to end: applications, servers, networks, and storage.
This strategy enables customers to choose the "right solution" for their problem and evolve their solution over time.
APIs and SDK: Choosing the Right Solution
NetApp Confidential 26
In-House Management Tools
Enterprise Management
Virtualization Management
Custom Management
The diagram illustrates how an orchestration framework, the Storage Service Catalog, and analysis capability
can be integrated to enable automated end-to-end management of shared IT infrastructures.
An application administrator requests storage at the high-service level.
The request moves to the OnCommand Storage Service Catalog, where predefined policies pair datasets with service levels for performance, availability, efficiency, and protection. To ensure capacity savings, the process can include deduplication.
Defined availability and protection levels automatically create backup and replication actions.
Similarly, newly provisioned VMs trigger the policy-based SLAs that are used with physical resources. SnapManager for VMware and SnapManager for Hyper-V enable the integration.
Finally, Insight analysis products track changes, collect performance data, and send alerts about significant events and threshold status.
Storage and Service Efficiency
Application Administrator
I need two 800-GB Oracle
instances at the Gold service level.
Data Center Orchestration Framework
SnapVault SnapMirror
Service
Catalog
Service
Analysis
Service
Measurement
Two 800-GB LUNs
GOLD SLA
Two VMs with
11Gb on tier 1
servers
Policy
Infrastructure
Two 800-GB LUNs
Thin provisioning
Deduplication
NetApp Confidential 28
This example illustrates the concept of efficiency, the key value that OnCommand management software
provides.
The example shows how a storage service (built with OnCommand policy, automation, service-catalog, and virtualization-awareness capabilities; coupled with NetApp analysis products; and integrated with a portal or orchestration platform) delivers the service and storage efficiency savings that IT organizations require today.
Storage
Efficiency
Service
Efficiency
Storage and Service Efficiency
Application Administrator
Data Center Orchestration Framework
SnapVault SnapMirror
Service
Catalog
Service
Analysis
Service
Measurement
Policy
Infrastructure
NetApp Confidential 29
BMC has implemented a software adapter that uses the NetApp open APIs and the NetApp SDK and that
takes full advantage of the Storage Service Catalog to enable full-stack, automated provisioning from BMC's Business Service Management (BSM) product.
The slide illustrates how a system administrator can automatically provision VMs and storage at a particular service level. Because service levels are defined through the service catalog, the provisioning process automatically allocates the storage and protection processes.
This example depicts the integration of a management platform with NetApp management software to enable service delivery of storage and to leverage NetApp efficiency technologies.
BMC and NetApp Automation
NetApp Confidential 30
Network Layer
IT Services
BMC Atrium
Adapter
NetApp
Management
Logical Pool of Storage
Provision two VMs, 100 GB
each at GOLD SLA
Provision
two VMs
Disaster recovery and off-site replication
Thin provisioning
Deduplication
RAID-DP
Provision two 100
GB at GOLD SLA
Atrium manages NetApp storage:
Full-stack automated
provisioning
Storage that is automatically provisioned and protected by
defined SLAs
Defined SLAs that
automatically deliver storage
and service efficiency
Manages the relationship between hosts and storage arrays and reports on:
– Capacity
– Operational recovery
– Replication service
Impact Management
Model and identify the impact of storage availability on business services
Integrate OnCommand Insight data to enable help-desk tracking and risk mitigation for storage services
Connector availability?
Target: Q3 FY11
The connector is bidirectional:
It imports application and business-unit (business line) information that exists in the CMDB into OnCommand Insight.
The information can be used for violation management, capacity management, and chargeback (assuming the customer has a capacity manager license).
How frequently is the SIM updated?
Near real time, upon OnCommand Insight SNMP trap generation.
BMC and NetApp Service Management
Business Service Management
Provide full dependency mapping
from the business services to the
storage services
Enable automatic Remedy ticket
creation for business service as a
result of a storage issue
Provide extension of business-
level impact analysis into storage
Connector
Bidirectional update for Atrium
CMDB and OnCommand Insight
data warehouse
NetApp Confidential 31
Infrastructure
OnCommand
Insight
Server
OnCommand
Insight
Server
OnCommand Insight
Data Warehouse
Atrium CMDB
Remedy
BMC
SIM
Service Desk
Storage
Admin
SNMP Trap
OnCommand
Insight
The connector handles the extraction of data from the OnCommand Insight data warehouse, its transformation into a service model, and the loading of the service model into the CMDB. It also imports application and business-unit information from the CMDB into OnCommand Insight.
What will I have to do in the CMDB?
Assuming that no conflicts exist in the data, everything is done automatically. When a conflict occurs (for example, server information cannot be found in the CMDB), the CMDB administrator must resolve it.
What about SIM integration?
The CMDB administrator must look at the traps that are captured from OnCommand Insight, use the server and storage information in the trap to find the storage service in the CMDB, and change its status. This requires manual configuration.
Most of the components of OnCommand software are delivered with NetApp hardware.
System Manager, which provides basic storage-system management, is ideal for customers who have only a few controllers. The 2.0 version, which was available as of August 2011, is included with the purchase of a
storage system.
Similarly, OnCommand management software is provided with NetApp storage systems. OnCommand
software is recommended for use with multiple controllers, to enable efficient management of larger environments. It was available as of September 2011. OnCommand and System Manager are included within the Data ONTAP Essentials bundle.
To take full advantage of virtualization-aware capabilities, customers must purchase the SnapManager suite, which includes entitlement to the SMVI and SMHV products.
Finally, NetApp analysis capabilities are provided by OnCommand Insight products (formerly OnCommand Insight and Akorri). The Insight products have capacity-based enterprise licenses, available separately.
With NetApp OnCommand, you have a single, unified approach to manage your storage simply, efficiently, and flexibly. OnCommand helps you better control your data and storage, automate common and complex tasks, and better analyze how to evolve your capacity to meet business needs and help lower costs. OnCommand delivers on both storage and service efficiency.
Using automation and analytics, OnCommand can help you lower operational costs and better plan your growth, which can reduce your IT spend by as much as 50%.
Finally, NetApp storage and OnCommand management software provide the ideal shared storage infrastructure for the virtualized data center.
SnapVault is the NetApp native Data ONTAP backup, recovery, and archive solution. It is ideal for use with
NearStore near-line storage.
SnapVault software:
Doesn’t require a NearStore Personality License or NearStore hardware
Is designed to address the pain points that are associated with tape
Uses intelligent data movement, transferring only the changes that are made at the block level
Reduces traffic across the network during data transfers
Reduces the impact on production systems
Can perform backups more frequently, because less data is backed up
Is based on Snapshot technology
Reduces the amount of backup media that is needed
SnapVault software works in controller-to-controller environments and in open systems environments (Open Systems SnapVault) and is usually implemented with a NearStore ATA-based secondary backup system.
SnapVault software can be used in disaster-recovery scenarios, if used in conjunction with SnapMirror products. SnapVault software does not create read/write copies; data becomes active only after it is
restored to a FAS system.
SnapVault
What Is SnapVault Software?
A data protection solution for heterogeneous
storage environments
Software that performs disk-to-disk backup and
recovery, which is ideal for use with NearStore near-line disk storage
A solution that is designed to address the pain points
that are associated with tape:
– Intelligent data movement that reduces network traffic
and production-system impact
– Frequent backups that ensure superior data protection
– Use of NetApp Snapshot technology to significantly
reduce the amount of backup media that is needed
NetApp Confidential 36
The traditional approach is to back up to tape — using backup software such as Veritas and Legato. In this
case, the backup is performed via file-level transfers. To back up a laptop, backup software such as Connected TLM or Veritas NetBackup Professional is used to transfer block-level changes.
The tape solution can be used to back up heterogeneous storage and operating-system and application environments. Software is installed on a backup server, and tape is attached to the server, either directly or through a storage network.
Full backups back up all data. Typically, full backups occur on weekends. Incremental backups usually back up only changed files. Incremental backups occur between the weekend full backups (for example, nightly). Remote backups are performed within the infrastructures that are located in remote offices. To enable disaster recovery, tapes are sent offsite.
Traditional Backup: Challenges
Challenges
Inability to hit "shrinking"
backup windows amid storage
growth and 24x7 demand
Restores that require too much
time and frequently fail
Remote office backups that are
challenging and prohibitively
costly
Increasing operating costs for
management and media
Infrequent backups
Data Center
Offsite LocationRemote Office
Tape
Remote Office
Tape
UNIX
Servers
Windows
Servers
Heterogeneous
Storage
Tape
Library
NDMP
FAS
Servers
NetApp Confidential 37
When the SnapVault solution is used, a full backup of all systems is performed on the NearStore system.
Thereafter, all backups are incremental, and only changed blocks are stored on disk.
Storing only changed blocks dramatically reduces the amount of information that is stored on disk. For backups from one NetApp system to another, only changed blocks are sent across the network, and only changed blocks are stored.
The data that is stored, including the data from all incremental backups, is in file format and can be viewed as a full backup image. Whether you want to view a backup that was performed four hours ago or four days ago, you can quickly locate the backup and have a full view into the environment as it was at the time the backup was performed. You do not need to backtrack step by step to view the data or locate the information that you need.
Both the tape process and the SnapVault process perform incremental backups, but the tape process performs the backups by file, and the SnapVault process performs the backups by block.
A SnapVault incremental backup is the equivalent of a full backup. For each day, only the changed blocks are moved, but all of the previously backed-up blocks are active. So, every day, the full file system is visible.
How do you restore data?
Assume that you need to restore data the night before your full (weekly) backup is to be performed. With the traditional solution, to restore the data, you must apply seven incremental (nightly) backups. With the SnapVault solution, each incremental backup is full (because all previously backed-up blocks are active and accessible), so data can be restored via a one-step process, rather than via a multiple-step process.
SnapVault Backup: Solution
Accelerated backups
Accelerated and guaranteed
restores
Significantly less manual
intervention and support
Network efficient backups—
only changed blocks sent
Media efficient backups—only
changed blocks and
incremental backups stored
forever
Extremely fast and granular
restores from an online disk
More frequent backups—as
often as hourly
UNIX
Servers
Data Center
Windows
Servers
FAS
Servers
UNIX
Servers
SnapVault
Block-level
incremental
backups
Each
Incremental
backup
is a full file
system Image
Remote Office   Remote Office   Remote Office
Heterogeneous
Storage
Windows
Servers
Features and Benefits
NetApp Confidential 38
With tape, to determine whether a backup is good, you must read the whole backup tape. And you must hope that the tape is readable. Tape is a volatile medium that is easy to damage. SnapVault software saves the
backup as a file system that can be read, written to, mounted, and browsed. You can view the SnapVault file
structure and see the data that has been backed up.
Administrators set up backup relationships, schedules, and retention policies. For example, a source might
create a Snapshot copy every 30 minutes and retain the four most current copies, one from six hours ago, and one daily copy starting 24 hours ago. The SnapVault system might move changed blocks from only the daily 24-hour Snapshot copies. A one-to-one correlation between the Snapshot copy policy at the source and the Snapshot copy policy at the destination is not required.
The SnapVault system retains fewer Snapshot copies per day or per week but retains them longer.
In regard to SnapVault operations:
Multiple qtrees can be backed up to one volume, if the qtrees have the same schedule and policy
SnapVault is qtree-based in native NetApp environments, so it always backs up to a qtree
A job moves data from the SnapVault primary location (source) to the SnapVault secondary location (destination)
One job can pull data from multiple SnapVault primary locations
How SnapVault Backup Works
Administrators set up a backup relationship, backup
schedule and retention policy.
Multiple qtrees can be backed up to one volume, if the
qtrees have the same schedule and policy.
A backup job is initiated based on a backup schedule and
can back up multiple systems.
After the initial (level 0) transfer, all backup jobs are
incremental.
A backup job moves data from the SnapVault primary
location to the SnapVault secondary location.
– Controllers transfer changed blocks to the SnapVault
secondary location.
– Open systems transfer changed files to the SnapVault
secondary location.
NetApp Confidential 39
Here is an example of a baseline transfer. At some point, all of the active data on the primary system (source)
needs to be moved to the secondary system (destination). Because the transfer is based on a Snapshot copy, as the transfer is processed, changes are occurring and production is continuing on the source. Therefore, there may be more Snapshot copies on the source than on the destination.
After the baseline transfer is completed, you can create a Snapshot copy and interact with the file system (view, browse, mount LUNs, and so on). During this time, changes continue on the source. Because the SnapVault secondary data is based on a Snapshot copy, the data never has to be quiesced, because it was quiesced before the Snapshot copy was created.
The destination does not request all of the blocks that are within the Snapshot copy; rather, it requests only the changed blocks. The destination and source views of the data are unique. The destination view is more backup-focused. Production on the source system is not affected by the data transfer of the SnapVault operation.
SnapVault Operations
Faster backups
Disk and network efficiency
Online access to hundreds of full backups
Primary Storage
SnapVault
SnapVault
Backup 1
SnapVault
Backup 2
A B C D
SnapVault
Backup 3
NetApp NearStore
Data blocks
Snapshot 1 Snapshot 2
A B C C’ D
Snapshot 3
Active
LUN-File System
Primary Storage
Baseline transfer on first backup
1. Transfer only incremental changes
2. Store only incremental changes
3. Recreate full copy of data
4. Delete Snapshot copies without impacting others
NetApp Confidential 40
SnapVault software protects the data on a SnapVault primary system by maintaining multiple read-only versions of the data on a SnapVault secondary system. The SnapVault secondary system is a data storage system (such as a NearStore system or a controller) that runs Data ONTAP.
First, a complete copy of the dataset is pulled across the network to the SnapVault secondary system. The initial (baseline) transfer may require some time to complete, because the transfer duplicates the entire source dataset (much like a level-zero backup to tape).
Establishing the baseline can be a time-consuming process, especially with a large file system and a low-throughput pipe. For example, transferring a 2-TB system over a 128-KB line can require months. In such a situation, many customers make the baseline transfer by placing the SnapVault system side by side with the primary system and making the baseline transfer locally. Then, they ship the SnapVault system to its final destination and start replicating the changed blocks on the Snapshot copy. Another option is to mirror the data, place it on tape, and restore the tape at the destination. Baseline transfers can also occur over the wire.
Each subsequent backup transfers only the data blocks that have changed since the previous backup (incremental backups forever). For some NetApp replication relationships, the baseline transfers were made eight or nine years ago, and all backups since that time have been incremental. Block-level incremental backups are available for both controller-to-controller SnapVault and Open Systems SnapVault, although the process for determining which blocks have changed is quite different.
When the initial full backup is performed, the SnapVault secondary system stores the data in a WAFL file system and creates a Snapshot copy of the data. A Snapshot copy is a read-only, point-in-time version of a dataset. Each Snapshot copy can be thought of as a full backup (although it consumes only a fraction of the space). A Snapshot copy is created each time a backup is performed, and a large number of Snapshot copies can be maintained, according to a schedule that the backup administrator configures. Each Snapshot copy consumes an amount of disk space that is equal to the differences between it and the previous Snapshot copy.
SnapVault Backup Flow Diagram
NetApp Confidential 41
Setup → Initial Full Backup → Incremental Backup → SnapMirror → Tape Backup

Initial Full Backup:
Backup images are in file format on disk.
Backups are immediately and easily verifiable.
The backup provides a reliable and redundant form of disk storage.

Incremental Backup:
Incremental backups are created forever.
Changed blocks are transferred for controllers; changed blocks or files are transferred for open systems.
Only changed blocks are stored for all systems.
All backup copies are full images.

SnapMirror:
Mirror the SnapVault secondary to a remote location by using SnapMirror software for disaster recovery (SnapMirror to Tape, Local Copy, or LAN Copy).

Tape Backup:
Use an NDMP backup application to back up data to tape at any time.
No backup window is needed.
Tape resources are centralized and used efficiently.
Data protection of the secondary system is common. A SnapVault secondary system can be protected by either backup to tape or backup to another disk-based system (such as a NearStore system). To back up to a secondary SnapVault system, you can create a volume-based SnapMirror relationship. To back up a secondary system to a tape library, you can use SnapMirror technology to mirror to tape or perform an NDMP backup to tape.
SnapVault software and SnapMirror technology are built on the protocol that transmits blocks across the WAN. They are designed specifically for WAN links. The progress of a transfer is recorded on both the source and the destination. Therefore, if a transfer is interrupted, it does not need to be restarted. The transfer stream includes numerous checkpoints, points at which the transfer can be restarted. At most, a transfer might have to repeat the transfer of a couple of hundred blocks. The transfer takes as much bandwidth from the pipe as it can obtain but, as needed, can be throttled.
Open Systems SnapVault extends the functionality of native SnapVault software to heterogeneous
environments. To back up data from Windows, Linux, or Solaris servers or from commercial UNIX platforms (HP-UX or AIX), you can use Open Systems SnapVault. It is an agent on the host that can transfer data to a SnapVault system and create backups that are based on Snapshot copies. The source system with the client does not have Snapshot copies or WAFL, so some of the work must be done differently in that case.
The delta can be managed in either of two ways:
The host sends an entire file across the wire and allows the SnapVault controller to figure out which blocks have changed. Because the previous and current versions of the data are stored locally, the controller can easily perform the comparison. This option requires high bandwidth.
The host maintains a database of checksums for each 4-KB chunk of the file's data and runs checksums to determine which blocks have changed. The host sends only the 4-KB chunks that are different. This option requires much less bandwidth but places a very large CPU load on the source system.
In a typical remote office, the CPU-intensive option is fine, because the office probably shuts down at night.
Because the server is sitting idle, it can run the checksums.
Various host agents are available:
BakBone, the original creator
The NetApp version of Open Systems SnapVault
Syncsort
CommVault
Syncsort is the only host agent that can perform bare-metal restores. With this type of restore, you place a
floppy disk in the system and boot from it. This method restores the entire operating system over the wire from SnapVault software. Other host agents need an OS to recover into: install Windows, then install Open Systems SnapVault, and then start the restore.
Open Systems SnapVault
Is an agent on the host
Creates backups based on Snapshot copies
Has these options:
– The host sends files and the SnapVault system determines
which blocks have changed and stores them.
– The host monitors block changes and sends only the changed blocks.
Open Systems SnapVault and regular SnapVault use the same process for defining mappings between
primary directories and secondary qtrees. In both cases, the schedule is set up on the secondary system.
However, a major difference is that, in an Open Systems SnapVault environment, the file system is scanned for changed files, and checksums are performed on the changed files and their associated data blocks. The checksums are then compared to the checksums from the last backup, and changed blocks are sent to the destination SnapVault system.
Unlike with Data ONTAP, with Open Systems SnapVault, there is no Snapshot copy on the primary system.
Phases of transfer:
Phase I is a resource-intensive process. The time that the process requires varies, depending on the size of the dataset, and can be lengthy. If block-level incremental (BLI) backups (that is, checksum calculations) are enabled, the period of time is extended. During a baseline transfer, if BLI is enabled and set to high, checksums are performed on every 4-KB chunk of every file. For an incremental backup, if BLI is enabled and set to high, checksums are performed on every 4-KB chunk of every file that has changed.
Phase II transfers the dataset to the SnapVault secondary (destination).
Phase III occurs when acknowledgements are sent to the host system. The acknowledgements confirm the dataset transfer.
NOTE: The checksum calculations, the local Open Systems SnapVault agent database (history, metadata), and a temporary directory are all stored on the primary system. In Open Systems SnapVault, allow sufficient space for all of these items on the primary system.
SnapVault Backup Process:
Open Systems SnapVault
Open Systems SnapVault uses the local, internal
database for relationship information, metadata,
indexing, and block-level incremental (BLI) data, so
you need storage space on the open system.
Read-only Snapshot copies are vaulted on the
secondary storage system.
Phases in the transfer process include the following:
– Phase I: The file system is scanned, and the directory
structure is built.
– Phase II: Datasets are transferred.
– Phase III: Acknowledgements are sent, and Softlock
negotiations occur.
NetApp Confidential 44
A SnapVault backup that is based on Data ONTAP differs from an Open Systems SnapVault backup in the
following ways:
For SnapVault backups, changed blocks are based on Snapshot copies. For Open Systems SnapVault backups, changed blocks are based on host or vault monitoring.
SnapVault backups are qtree-based. Whether the source data is a directory, subdirectory, or NetApp qtree, it is backed up to a qtree.
For Open Systems SnapVault backups, tape restores are more complicated. When the SnapVault backup is based on Data ONTAP, restoring from a native tape to a native primary system is a one-step process. The system can skip the destination and return directly to the source. With Open Systems SnapVault backups, restoration is a two-step process. The backup must be restored to a NetApp box and then pushed from the box to the original source.
Controller and Open System Comparison
Incremental Backup:
Controller SnapVault: Snapshot technology is used to transfer changed blocks.
Open Systems SnapVault: BLI is used on the primary system to transfer changed files.

Source Data:
Controller SnapVault: All non-qtree data can be backed up to one qtree.
Open Systems SnapVault: A directory or subdirectory can be backed up to one qtree.

Tape Restore:
Controller SnapVault: A one-step process restores from tape to the primary system.
Open Systems SnapVault: A two-step process restores from tape to the secondary system and then restores to the primary system.

Snapshot on Primary System:
Controller SnapVault: A Snapshot copy is created, or an existing Snapshot copy is used, on the SnapVault primary system.
Open Systems SnapVault: The live file system is backed up; a Snapshot copy is not needed.
NetApp Confidential 45
Worldwide, regulations dictate the way businesses store information. In addition, enterprises store information securely to protect their intellectual property and to defend themselves against litigation. External regulations and internal corporate-governance requirements significantly impact data-storage needs. The regulations and requirements can be divided into two categories: 1) data permanence and 2) privacy and security.
Data permanence can be defined as the need to store data in a form that can be proven not to have changed over a period of years. Data-permanence requirements specify data-retention elements such as:
Immutable storage: referred to as write once, read many (WORM) storage, which is storage from which data cannot be deleted or modified for the duration of the retention period
Data authenticity: the ability to prove that data was written on the media accurately the first time
Data integrity: the ability to prove that data has not been altered since it was first written and that the integrity of the data will be protected for the retention period
Data replication: the storing of a data copy that is separate from the original copy to ensure data availability, even in the case of disaster
The following regulations control authorized access to private user and company data:
Authorization: allowing data access to authorized individuals
Access controls: limiting individual rights to perform certain actions with the data
Encryption: protecting the privacy of data in transmission or at rest
Auditing: keeping a log of who did what to the data, and when
Secure deletion: deleting data so that it can never be recovered
Typically, enterprises are subject to a variety of regulations. These regulations may mandate a matrix of requirements and cut across data permanence, privacy, and security. Enterprises should take a big-picture, long-term view of compliance storage, rather than focusing on storage infrastructures that meet only current
requirements.
Compliance Drivers and Requirements
Litigation Protection Regulations
SEC 17a-4
Sarbanes-Oxley
NASD 3010/3110
DOD 5015.2
SB 1386
Gramm-Leach-
Bliley
HIPAA
Data Permanence
Immutable storage
Data authenticity
Data integrity
Data replication
Privacy and Security
Authorization
Access controls
Encryption
Auditing
Secure deletion
Compliance Requirements
Market Drivers
Most companies are subject to multiple regulations
Basel II
Check 21
Patriot Act
21 CFR Part 11
UK Data
Protection Act
NetApp Confidential 48
SnapLock compliance software is a production-side compliance solution. Two licenses are available with
SnapLock:
The SnapLock Compliance license allows the removal of data only after the compliance window (as determined by the compliance clock) has elapsed. The data is destroyed through physical destruction of the drives. The lock cannot be undone; rather, the physical container that holds the data must be destroyed, and that process destroys the data.
The SnapLock Enterprise license allows disks to be erased. This type of compliance is used for business-compliance rules, not for regulatory compliance.
To manage data:
1. An administrator creates a SnapLock volume or aggregate (a physical-layer container).
2. The container is shared, and a copy of the file is moved.
3. The most recent access time is changed to reflect the retention date, and the permissions are set to read-only.
4. The file is stored until the retention date arrives.
This process is intended for only structured and semi-structured data sets, so an application can control the
details.
SnapLock Usage: Process
1. Use the SnapLock Compliance or SnapLock Enterprise
license to enable SnapLock software.
2. Create a SnapLock volume or aggregate, but realize that
you cannot convert a volume to a SnapLock volume.
3. Share or export the SnapLock volume.
4. Copy the write-enabled file over NFS or CIFS.
5. Change the last access time to reflect the retention date.
6. After the file is stored on the SnapLock volume, use a
script or an application to change the permissions to read
only.
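The client-side commit in steps 4 through 6 uses only standard file-system operations, which is why a script or any application can perform it. The following is a minimal sketch of that commit, assuming a file name and retention period chosen for illustration; on an ordinary file system (as here) these calls only change metadata, but when the file sits on a SnapLock volume accessed over NFS or CIFS, the filer interprets them as the WORM commit.

```python
import os
import stat
import tempfile
import time

def commit_to_worm(path: str, retention_epoch: float) -> None:
    """Commit a file to WORM state, SnapLock-style.

    The retention date is communicated by setting the file's last-access
    time, and the commit is triggered by removing write permissions.
    """
    st = os.stat(path)
    # Step 5: set the last-access time to the desired retention date.
    os.utime(path, (retention_epoch, st.st_mtime))
    # Step 6: drop write permissions so the file becomes read-only.
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

# Demo: retain a record for roughly seven years.
workdir = tempfile.mkdtemp()
record = os.path.join(workdir, "record.txt")
with open(record, "w") as f:
    f.write("transaction log entry")

retention = time.time() + 7 * 365 * 24 * 3600
commit_to_worm(record, retention)
print(oct(os.stat(record).st_mode & 0o777))  # expect 0o444
```

After the commit, any attempt to write to or delete the file before the retention date is rejected by the filer, not by the client.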
Stress to partners and customers that NetApp systems cannot undo the creation of a SnapLock volume. Once created, a SnapLock volume is permanent. Customers and partners should follow the best-practice guidelines for creating and maintaining SnapLock volumes, as detailed in the following paper: http://www.netapp.com/tech_library/3263.html.
Changes to volume commands for SnapLock usage include the following:
Data ONTAP 7G introduced the -L switch to the vol and aggr commands for creating compliant data stores.
Starting in Data ONTAP 6.4.1, once a SnapLock volume is created, it cannot be destroyed.
Because vol copy essentially destroys WORM data, a copy is not allowed to a destination SnapLock volume.
You can create SnapLock aggregates and SnapLock traditional volumes. A standard aggregate cannot contain a flexible volume that contains compliance data; only compliance aggregates can contain flexible volumes that contain compliance data.
SnapLock Usage: Technical Details
The vol command has been changed:
– vol create: use the -L switch to specify SnapLock software.
– The System Manager browser-based administration tool does not support the SnapLock option.
In Data ONTAP 7-Mode, aggr is the command:
1. Use aggr create aggr_name -L to create SnapLock aggregates.
2. Create flexible volumes on the SnapLock aggregate. The type of SnapLock volume depends on the type of license: SnapLock Compliance or SnapLock Enterprise.
This graphic shows that the cost of a solution increases as the recovery point objective (RPO), the point to which you want to be able to recover, becomes closer and closer to real time (or immediate). The concept is valid, and most businesses have multiple types of data with multiple priorities along this curve.
Not many environments have a business need for continuous operations for any class of data. Financial environments are the most common that have a real-time recovery point. Some types of companies exist for which online transaction processing (OLTP) requires data that is current up to the last I/O operation.
Recovery Objectives
Figure: cost and availability rise as the currency of data moves from days to hours, minutes, and seconds, through weekly backup, daily backup, remote vaulting, remote journaling, database replication, mirroring, hot standby, and continuous operations. (Source: Deloitte and Touche)
Many reasons exist for unplanned downtime, including operational failures, application failures, component and system failures, site failures, and regional disasters. Of that unplanned downtime, 40% is caused by operational failures (operator errors), another 40% is caused by application failures, and the remaining 20% is caused by component and system failures, site failures, and disasters. Storage system failures account for a negligible percentage of all system and component failures as a result of the storage resiliency features that are built into all of the most widely used systems.
The types of failures are summarized as follows:
Operational failures: The proliferation of storage silos, multiple architectures, and products and technologies that are not interoperable makes IT infrastructures increasingly complex. This complexity often results in a rise in operator errors.
Application failures: As new functionality is added to applications that must support multiple underlying architectures, complexity increases, and so does the likelihood of an application failure.
Component and system failures: Failures in system components often result in long recovery times and in data corruption. A high level of storage resiliency is essential to preventing downtime and loss of data.
Site failures and regional disasters: Of the different types of unplanned downtime, site failures and regional disasters are the least likely to occur, but they are responsible for the highest costs when they do happen.
Causes of Unplanned Downtime
Probability of occurrence:
– Operational failures (40%): people and process issues, infrastructure changes, configuration and problem management
– Application failures (40%): bugs, performance issues, change-management process
– Component and system failures (10%): controller failure, host bus adapter (HBA) and port failure, disk failure, shelf failure, FC loop failure
– Site failures (7%): terrorist attacks, HVAC failures, power failures, building fire, plumbing accidents, architectural failures, planned downtime
– Regional disasters (3%): electric grid failures; natural disasters such as floods, hurricanes, and earthquakes
Past
Organizations used to rely on tape for most of their backup and recovery needs when they needed to recover a previous copy of data after a failure occurred.
Because the goal was never to go down, IT organizations put a heavy emphasis on high-availability (HA), clustered solutions.
Only the most mission-critical applications were protected against disasters with synchronous replication solutions. Because of limited budgets, the rest of the applications were not covered under a disaster-recovery plan. Off-site shipment of tapes was the last level of protection for these applications.
Present
Increasing data-center complexity results in an increase in operator errors, which, in turn, leads to increased downtime.
Tape is no longer adequate as a backup and recovery medium. It takes too long to back up and recover and doesn't meet the requirements of increasingly strict SLAs.
Storage system failures account for a negligible percentage of all system and component failures as a result of the storage resiliency features that are built into all of the most widely used systems.
The need for disaster recovery solutions is increasing because of terrorist activities, recent disasters, and the need for compliance with the Sarbanes-Oxley Act (SOX).
As customers continue to consolidate expensive UNIX servers onto commodity clusters, they look at a consolidated disaster-recovery plan for a broader set of applications.
The State of the Market
Past:
– Emphasis on high-availability (HA) clustered solutions
– Reliance on tape for all backup and recovery needs
– Disaster-recovery protection for only the most mission-critical applications
– Cost and complexity: barriers to widespread adoption
Present (well addressed by the top enterprise storage vendors):
– Increasing data-center complexity that results in downtime because of operator errors
– A need for faster disk-based data recovery
– Increasing need for HA and disaster recovery solutions
– Server and storage consolidation
– A need to protect a broader set of applications cost-effectively
(Figure spans the failure categories: operational failures, application failures, component and system failures, site failures, and regional disasters.)
This is where NetApp products roughly fit on the same style of curve. This is not a precise mapping. What is relevant is that NetApp has products that allow customers to reach any level of recovery point that they need.
NetApp Software Architecture
Figure: NetApp products mapped along the cost-and-availability curve, from low-level to high-level SLAs:
– Daily backup and Snapshot copies: SnapRestore software
– Block-level incremental backups: SnapVault software
– Asynchronous replication over LAN and WAN: asynchronous SnapMirror
– Synchronous replication and application recovery: synchronous SnapMirror and high availability; Synchronous SnapMirror and SyncMirror software
– Clustering and continuous operations: MetroCluster
This is a representation of a hypothetical disaster-recovery architecture. Many NetApp customers have these cascading environments, especially large customers with multiple sites around the world.
As you see, multiple classes of data are handled in different ways. Some of the data is immediately mirrored by using SnapMirror technology directly to a remote site. Some of the data is stored by SnapVault software locally. Eventually, all of the data is mirrored by SnapMirror technology to the remote site.
The example also shows the use of Open Systems SnapVault, MetroCluster, regular clusters, and some stand-alone systems. All of these are mirrored to a third site where they have their backup structure. This entire operation can be performed by using NetApp technology, which provides a single-vendor solution.
NetApp uses the same design internally. The primary NetApp disaster-recovery center is in Sacramento, California, 75 miles away in a straight line from NetApp headquarters in Sunnyvale, California. The advantage is that the primary NetApp disaster-recovery center is outside the most dangerous earthquake zone, so it is theoretically safer. Everything gets replicated to the Sacramento site, and then the most critical data gets replicated to Amsterdam, the NetApp European headquarters. From there, it gets replicated to Bangalore, India, which is the largest NetApp Asia-Pacific office, and from Bangalore it is replicated back to North America to the Research Triangle Park facility. Each of those sites has its own primary data, so that primary data is also replicated out to the other sites.
Disaster-Recovery Architecture
Figure: a remote data center and a major data center replicate over an IP or FC network to a disaster-recovery site. Elements include SnapVault software, clustered and nonclustered NetApp servers, UNIX, Windows, and Windows NT servers, FAS systems with NPL, MetroCluster, and a backup server with a tape library at the disaster-recovery site.
First, look at SnapMirror software as a flexible solution. SnapMirror software is primarily used as a disaster recovery solution. SnapMirror software replicates unique data blocks at high speeds over LAN, WAN, or FC networks to minimize bandwidth utilization and provide protection against unplanned downtime.
Customers now use it for business intelligence, data distribution, and development and testing to maximize utilization of their disaster-recovery site for better ROI. This is enabled by FlexClone technology, which creates instantaneous, space-efficient clones off your SnapMirror copy on the disaster-recovery site, so you can run your other business activities without impacting your production-site operations.
SnapMirror software leverages the NetApp Unified Storage Architecture, which means that customers can use a single product that can replicate between tiers of NetApp storage (which can be FC systems on the primary and SATA systems on the disaster-recovery site) and between third-party storage by using NetApp V-Series systems for investment protection.
Customers can also use multiple replication modes (synchronous, semi-synchronous, and asynchronous) to tune their RPO to meet their business needs.
SnapMirror software also supports all applications and protocols (including FC, iSCSI, NFS, and CIFS).
All these benefits of SnapMirror software apply equally well to virtual and traditional physical environments.
Flexible Disaster Recovery
Protects and accelerates business with 60% lower TCO:
– One to many and many to one
– Any platform to any platform: any FAS system, FC or SATA disk
– Replication between NetApp and third-party storage (through V-Series systems)
– The ability to tune to meet business requirements: synchronous, semi-synchronous, or asynchronous
– Support for all applications and protocols
(Figure: SnapMirror software with MetroCluster and application integration.)
The benefits of SnapMirror software can be realized in virtual environments regardless of the vendor. NetApp works with VMware, Microsoft Hyper-V, and Citrix XenServer.
Customers can extend the power of SnapMirror software to virtualized storage environments, for example, with VMware Site Recovery Manager for rapid, reliable, and affordable automated site-disaster recovery. Enhanced application protection for virtualized applications through integration with SnapMirror software means that customers can achieve high levels of availability through instantaneous recovery and access of data through failed-over virtual machines (VMs) on the secondary site. Together, these products provide customers with a robust disaster recovery solution that reduces the risk, cost, and complexity that are associated with traditional disaster-recovery approaches.
From an efficiency perspective, you know that SnapMirror software provides thin replication by leveraging the many storage-efficiency technologies that NetApp has had for many years, including Snapshot copies, RAID-DP technology, and deduplication. SnapMirror software has introduced a built-in network-compression capability to help reduce customers' network bandwidth utilization. Data transfers are accelerated to free the network for other uses. And because customers can replicate more often, that means a lower RPO at no additional cost: no additional hardware costs, no additional license costs, and no extra devices to manage. In lab testing, NetApp has seen bandwidth utilization reduced by 72% for Oracle data, 63% for home directories, and 53% for Exchange. One customer, North American Banking Company, uses SnapMirror compression and has seen a 66% improvement in bandwidth utilization, which saves the company an estimated $10,000.
Finally, customers can virtually partition storage and provide secure multi-tenancy, with the ability to replicate data across partitions with the knowledge that the data is protected.
DCI, a financial-services company, says, "NetApp software takes care of automating replication and recovery processes, and VMware SRM automates the failover. Should we ever experience a site disaster, in a matter of minutes we can be up and running at the DR facility. And it costs us about 50% less than before."
Disaster Recovery for Virtual Environments
– Broad support to meet needs: VMware, Microsoft Hyper-V, Citrix XenServer
– Integrated with VMware SRM: enables automated virtual machine (VM) failover
– Leverages storage efficiency: up to 90% less primary and disaster-recovery storage; up to 70% less network utilization
– Designed for shared architectures: secure multi-tenancy across virtual storage partitions
(Figure: on a site failure, VMs and virtual storage partitions at the primary data center fail over to the disaster-recovery site, with data replicated by SnapMirror software.)
NetApp has disaster-recovery relationships that go back and forth between all five of the NetApp sites around the world. The product that makes that possible is SnapMirror software.
SnapMirror technology replicates a file system on one controller to a read-only copy on another controller. The replication can be volume-based or qtree-based, depending on the circumstances of the transfer. Like SnapVault software, SnapMirror software is based on Snapshot technology, so only the changed blocks must be moved after the initial baseline is in place. SnapMirror software can be asynchronous or synchronous in its transfer type and can run over IP or FC.
Customers can have one source that goes to many destinations or many sources that go to one destination. SnapMirror technology can cascade and be utilized in multihop scenarios. Probably the most important difference is the resynchronization process: if you move production to the destination and make changes there, you must be able to get those changes back to the original source. That is easy to do with SnapMirror technology. Like SnapVault software, SnapMirror software is easy to schedule and throttle.
SnapMirror software was the first replication product from NetApp and came out in 1997. SnapVault software was later based on SnapMirror technology and utilizes the same underlying engine.
SnapMirror Overview
SnapMirror software replicates a file system on one controller to a read-only copy on another controller:
– Replication is volume-based (traditional or flexible) or qtree-based.
– Based on Snapshot technology, only changed blocks are copied after the initial mirror is established.
– Asynchronous and synchronous operations are possible.
– SnapMirror software runs over IP and FC.
– Data is read-accessible at remote sites.
– "One to many" means multiple copies.
– "Many to one" means consolidation.
– Cascade and multihop follow-on destinations are supported.
– Resynchronization is easy.
– Scheduling and throttling are easy.
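The "only changed blocks after the initial baseline" behavior can be illustrated with a toy model. The block-map representation below is a simplification for illustration only, not the actual SnapMirror on-disk or wire format: a snapshot is modeled as a mapping from block number to block contents, and an update transfers just the blocks that differ from the baseline.

```python
from typing import Dict

Block = bytes
Snapshot = Dict[int, Block]  # block number -> block contents

def changed_blocks(base: Snapshot, current: Snapshot) -> Snapshot:
    """Return only the blocks that differ from the baseline snapshot."""
    return {n: data for n, data in current.items() if base.get(n) != data}

def apply_update(mirror: Snapshot, delta: Snapshot) -> None:
    """Write the transferred blocks into the mirror copy."""
    mirror.update(delta)

# Initial baseline transfer: the whole volume goes across once.
source = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
mirror: Snapshot = dict(source)
baseline = dict(source)

# Later, only block 1 changes on the source...
source[1] = b"BBBB"
delta = changed_blocks(baseline, source)
apply_update(mirror, delta)
print(sorted(delta))     # [1] -- only the changed block was sent
print(mirror == source)  # True
```

The same idea explains why resynchronization after a failover is cheap: only the blocks changed on the destination need to travel back.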
VOLUME SNAPMIRROR SOFTWARE VERSUS QTREE SNAPMIRROR SOFTWARE
SnapMirror technology can be configured for whole volumes or for individual qtrees in a volume. Volume SnapMirror technology replicates an entire volume and all the associated Snapshot copies to the secondary, including the volume's qtrees. The replicated volume looks identical to the source volume, including the Snapshot copies. Volume SnapMirror technology can be used only on volumes of the same type (both traditional or both flexible volumes). Volume SnapMirror technology is a block-based replication. Therefore, earlier versions of the Data ONTAP architecture cannot understand file-system transfers from later versions.
Qtree SnapMirror technology is used between qtrees, regardless of the type of the volume (traditional or flexible). Qtrees from different sources can be replicated to a destination, and the Snapshot copy schedules on the source and destination are independent of each other. Qtree SnapMirror replication is logical replication: all the files and directories are created in the destination file system. Therefore, replication can occur between different versions of Data ONTAP software. Qtree SnapMirror technology can operate only in asynchronous mode.
Volume SnapMirror replication cannot occur from later to earlier versions of Data ONTAP software; however, the reverse is possible. If Volume SnapMirror technology is configured to replicate from an earlier to a later version, customers should upgrade the earlier-version source as soon as possible. This allows customers to resynchronize (reversing the replication relationship) during a disaster-recovery scenario. This is also true for synchronous SnapMirror technology; however, qtree SnapMirror technology does not have this restriction.
Volume SnapMirror Software
Versus Qtree SnapMirror Software
Volume SnapMirror software:
– Replication of the entire volume: Snapshot copies and qtrees replicate.
– Volumes must be the same type (traditional or flexible).
– Block-based replication
Qtree SnapMirror software:
– Replicates only the qtree
– Can consolidate qtrees from multiple systems
– Provides logical, file-based replication
– Has no volume-type or Data ONTAP version requirements
– Is asynchronous only
SnapMirror software can be configured into three replication modes. All are available with a single license.
The first mode is synchronous SnapMirror. In this solution, the data at the disaster-recovery site exactly matches the data at the primary site. This is achieved by replicating every data write to the remote location and not acknowledging to the host that the write has occurred until the remote systems confirm that the data has been written. This solution provides the least data loss, but a limit of 50 to 100 km exists before latency becomes too great, because the host application must wait for an acknowledgment from the remote NetApp devices.
Semi-synchronous SnapMirror allows customers to achieve a near-zero-data-loss disaster recovery solution without performance impact on the host application. The solution also allows customers to perform synchronous-type replication over longer distances. When data is written to the primary storage, an acknowledgment is immediately sent back, which eliminates the latency impact on the host. In the background, SnapMirror software tries to maintain as close to synchronous communication as possible with the remote system. SnapMirror software has user-defined thresholds that control how far out of synchronicity the source and remote copy datasets are allowed to get.
Asynchronous SnapMirror allows customers to replicate data at adjustable frequencies. Customers can perform this type of point-in-time replication as frequently as once per minute or as infrequently as once in several days. No distance limitation exists, and the mode is frequently used to replicate across long distances to protect against regional disasters. Only the blocks that change between each replication are sent, which minimizes network usage.
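The semi-synchronous trade-off (immediate host acknowledgment plus a user-defined synchronicity threshold) can be sketched as follows. This is a minimal illustrative model, not the SnapMirror implementation; the class name, `max_outstanding` parameter, and method names are all hypothetical stand-ins for the real user-defined thresholds.

```python
from collections import deque

class SemiSyncMirror:
    """Sketch of semi-synchronous replication with a lag threshold.

    Writes are acknowledged to the host immediately; the mirror is
    updated in the background. If the backlog exceeds `max_outstanding`
    writes, the relationship has drifted past its allowed window.
    """
    def __init__(self, max_outstanding: int):
        self.max_outstanding = max_outstanding
        self.backlog = deque()   # writes acknowledged but not yet mirrored
        self.mirror = []         # what has reached the remote site

    def write(self, data: str) -> str:
        self.backlog.append(data)  # the host gets its ack right here
        return "ack"

    def drain(self, n: int = 1) -> None:
        """Background transfer of up to n outstanding writes."""
        for _ in range(min(n, len(self.backlog))):
            self.mirror.append(self.backlog.popleft())

    def within_threshold(self) -> bool:
        return len(self.backlog) <= self.max_outstanding

m = SemiSyncMirror(max_outstanding=2)
for block in ["w1", "w2", "w3"]:
    m.write(block)
print(m.within_threshold())  # False: 3 outstanding writes exceed 2
m.drain(3)
print(m.within_threshold(), m.mirror == ["w1", "w2", "w3"])  # True True
```

Synchronous mode corresponds to acknowledging only after the mirror write lands; asynchronous mode corresponds to draining the backlog on a timer rather than per write.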
SnapMirror Modes
Synchronous SnapMirror:
– No data-loss exposure
– A replication distance of less than 100 km
– Some performance impact
(Every write is replicated to the destination before it is acknowledged.)
Semi-Synchronous SnapMirror:
– Seconds of data exposure
– No performance impact
(Every write is acknowledged immediately and replicated in the background.)
Asynchronous SnapMirror:
– From one minute to hours of data exposure
– No distance limit
– No performance impact
(Changed blocks are sent at set intervals.)
Companies assume that they need synchronous mirroring to have the best protection. The key question is: What is the recovery point? The customer must take a realistic view of the company's needs and consider the implications. Synchronous replication is not always the best choice for the situation.
Customers should consider these points when they decide on a level of synchronization:
– Operation caching: If a line goes down, what happens? How should the recovery occur?
– Distance limitations
– Latency limitations
– The performance impact of a down communication line or system failover
Most NetApp customers choose asynchronous mirroring for the following reasons:
– A Snapshot copy is created every minute.
– Asynchronous mirroring guarantees a consistent file system, whether it is SAN or network-attached storage (NAS), every minute. (Guaranteed consistency is more valuable to most NetApp customers than synchronous replication with all of the limitations that come with it.)
Synchronous, Asynchronous,
or Semi-Synchronous?
People assume that they need synchronous mirroring.
Synchronous SnapMirror issues include:
– Cache
– The communication line going down
– Distance limitations
– Latency limitations
– Performance impact
Most customers go with Asynchronous SnapMirror:
– A Snapshot copy every minute
– A guaranteed consistent file system every minute
If there are problems with the network, synchronous replication might go into an asynchronous mode. Ordinarily, the source and destination controllers periodically communicate with each other to maintain the connection. In the event of a network outage, synchronous SnapMirror goes into an asynchronous mode if the periodic communication is disrupted. When in asynchronous mode, the source controller tries to communicate with the destination controller once every minute until communication is reestablished. Once communication is reestablished, the source controller asynchronously replicates data to the destination every minute until synchronous replication can be reestablished.
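The mode transitions described above can be modeled as a small state machine. This is an illustrative sketch only; the function name and the list-of-booleans input (one entry per one-minute check, True when the link is reachable) are assumptions, and the real behavior lives inside Data ONTAP.

```python
def run_replication(link_states):
    """Simulate SnapMirror sync error handling.

    States: 'sync' (normal), 'async' (link down, retrying each minute),
    and 'catch-up' (link restored, asynchronous updates every minute
    until synchronous replication can be reestablished).
    Returns the mode observed at each one-minute check.
    """
    mode = "sync"
    history = []
    for up in link_states:
        if not up:
            mode = "async"      # fall back while the link is down
        elif mode == "async":
            mode = "catch-up"   # link restored: async updates resume
        elif mode == "catch-up":
            mode = "sync"       # backlog drained: sync reestablished
        history.append(mode)
    return history

print(run_replication([True, False, False, True, True, True]))
# ['sync', 'async', 'async', 'catch-up', 'sync', 'sync']
```

The catch-up phase is what prevents the outage from turning into a full baseline retransfer.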
SnapMirror Sync Error Handling
– Automated fallback to async mode when connection is disrupted
– Attempts to reestablish sync mode at one-minute intervals
– Automatic reestablishment of sync operations as soon as possible
Multiple hops can be used to protect against site disasters (with a synchronous replication solution) and regional disasters (with an asynchronous replication solution). SnapMirror technology can also replicate from multiple data centers to a central disaster-recovery site, where you can centralize your tape backup infrastructure, which reduces your costs.
SnapMirror Flexibility
– Multiple hops
– Asymmetric replication
– Heterogeneous replication with V-Series systems
(Figure: a cascading topology of sync then async hops, and a many-to-one topology consolidating FAS systems, including FAS with NPL, through V-Series systems onto an enterprise storage array.)
When customers buy SnapMirror technology, they get everything, but two license numbers exist: one for semi-synchronous SnapMirror and another for asynchronous SnapMirror.
A user can change the relationship between synchronous, semi-synchronous, and asynchronous modes. The relationship can be set up in any way as long as the baseline is established. The modes can be changed without performance impact or baseline resynchronization.
No separate source or destination license exists. Because only one license covers both source and destination, the same box can be a destination and a source.
SNAPMIRROR SOFTWARE AND SNAPVAULT SOFTWARE: PRODUCT COMPARISON
The differences between SnapMirror technology and SnapVault software may be confusing at first. Here is a summary.
SnapMirror software can be set to run every minute, while SnapVault software is normally scheduled no more than once every hour. SnapMirror software performs no Snapshot copy coalescing or management, while SnapVault software performs both.
In a SnapMirror relationship, the Snapshot copies are the same on the destination as on the source. With SnapVault software, a different schedule is used, which is synchronized with the backup scenario. The blocks that are stored on the destination may be different from those on the source; only those blocks that are necessary to maintain the Snapshot copies are stored on the destination. SnapVault software manages blocks on the destination differently from what is visible on the source.
With SnapMirror technology, transfers can go two ways. With SnapVault software, transfers are one-way only. With SnapVault software, users never intend for the destination to become production, so they do not need to synchronize data in the other direction, although they can restore data. With SnapMirror software, not only can relationships go both directions between machines, but those relationships can also be easily reversed.
SnapMirror software can be used to mirror volumes or qtrees. SnapVault software backs up qtrees only.
The destination can easily be made read-write in a SnapMirror relationship. With SnapVault software, the destination is always read-only, and SnapVault software can be used to back up open systems with Open Systems SnapVault.
SnapMirror Software and SnapVault Software: Product Comparison

SnapMirror Software | SnapVault Software
Can be scheduled to run every minute | Can be scheduled to back up every hour
Provides no Snapshot coalescing | Provides Snapshot coalescing*
Provides no Snapshot copy management | Provides additional Snapshot copy management
Can transfer two ways | Can transfer one way only
Mirrors volumes or qtrees | Backs up qtrees
Can use a read-write destination | Always uses a read-only destination
Does not support open systems | Can back up open systems

*Coalescing reduces the number of overhead Snapshot copies that are needed on the secondary system, which allows customers to keep more backup copies online.
The primary goal of MetroCluster is to provide mission-critical applications with redundant storage services in the event of site-specific disasters such as fire or long-term power loss.
MetroCluster can also be described as follows: MetroCluster is designed to tolerate site-specific disasters with minimal interruption to mission-critical applications and zero data loss by synchronously mirroring data between two sites.
You should adjust the focus depending on whom you are talking to. Some NetApp clients focus on the redundancy of data; others focus on the recoverability of the system.
MetroCluster Overview: Design Goals
– The primary goal of MetroCluster is to provide mission-critical applications with redundant storage services in the case of site-specific disasters (for example, fire or long-term power loss).
– MetroCluster tolerates site-specific disasters with minimal interruption to mission-critical applications and zero data loss by synchronously mirroring data between two sites.
Failures can be a result of acts of nature or of something going wrong in the system.
An act of nature is obviously the worse of the two scenarios, in both human and physical terms, but it is also worse because you cannot tell the difference between the sudden destruction of a site and a network outage between the two sites. So, after this type of disaster, the system will not fail over automatically. If all communications are suddenly lost, an automatic failover is not performed. This contrasts with a standard side-by-side cluster, in which case the system would fail over.
If something is going wrong inside a system, such as an internal failure, the system knows it is going down, and it sends a signal across the line so the other system knows to take over, causing an automatic failover to occur.
In a natural disaster, an administrator must declare that a disaster has happened and tell the other system to do the takeover, so the system avoids a split-brain scenario and data corruption. You want to avoid split brain in any clustered environment. Some customers have automated this process: they have decided that if three independent network connections fail simultaneously, they assume it is a real disaster and have a script that sends the takeover command. But many customers leave the disaster failover process as a manual process.
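The takeover logic described above can be summarized in a few lines. This is a conceptual sketch with hypothetical function and parameter names; on a real MetroCluster, the operator's confirmation is the cf takeover -f command, not an API call.

```python
def takeover_decision(partner_signaled_shutdown: bool,
                      heartbeat_lost: bool,
                      operator_confirmed: bool) -> str:
    """Decide whether a MetroCluster node should take over for its partner.

    A controller fails over automatically only when its partner announced
    that it was going down. A sudden loss of all communication is ambiguous
    (site destruction vs. a network outage between the sites), so takeover
    then requires an operator to confirm the disaster, avoiding split brain.
    """
    if partner_signaled_shutdown:
        return "automatic takeover"
    if heartbeat_lost:
        return "takeover" if operator_confirmed else "wait for operator"
    return "normal operation"

print(takeover_decision(True, False, False))   # automatic takeover
print(takeover_decision(False, True, False))   # wait for operator
print(takeover_decision(False, True, True))    # takeover
```

The customers who automate the decision are effectively supplying `operator_confirmed` from a script that checks several independent network paths.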
Two Types of Failure Scenarios
Disasters require an operator to confirm the disaster and manually run the cf takeover -f command before a cluster failover can occur.
MetroCluster is a way to stretch a cluster beyond the 500-meter distance limitation. This is very valuable for sites that need a cluster across a campus or metropolitan area, to allow for some localized failures while still running as a cluster with failover integration.
This is very popular in industries and countries where a metropolitan separation is mandated for disaster recovery.
A MetroCluster configuration comprises the following components and requires the following licenses:
– An HA pair (cf license): provides automatic failover capability between sites in the case of hardware failures.
– SyncMirror software (syncmirror_local license): provides an up-to-date copy of data at the remote site; data is ready for access after failover without administrator intervention.
– Controller failover (cf_remote license): provides a mechanism for the administrator to declare a remote-site disaster and initiate a site failover through a single command for ease of use.
– FC switches (vendor-specific): provide controller connectivity between sites that are greater than 500 meters apart; enable sites to be located a safe distance away from each other.
MetroCluster
26
MetroCluster is a cost-effective replication solution for combined high availability and SyncMirror disaster recovery within a campus or metro area.
LAN/SAN
Major Data Center Nearby Office
FAS or
V-Series
Disks
Stretch MetroCluster provides campus disaster recovery protection
– Can stretch up to 500 m
Fabric MetroCluster provides metropolitan disaster recovery protection
– Can stretch up to 100 km with FC switches
V-Series MetroCluster Configurations
MetroCluster can address a customer’s continuous-availability requirements whether MetroCluster is deployed inside a data center, at multiple locations in a building, or across city or metropolitan-wide deployments up to a distance of 100 km. This enables a level of availability that goes beyond the HA features in a single array, which makes MetroCluster a highly versatile solution.
Two versions of MetroCluster exist: fabric and stretch.
Stretch is for short distances of up to 500 m, with a direct FC connection between the systems.
Fabric is the long-distance version, for up to 30 km out of the box or up to 100 km with a policy-variance request (PVR).
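The distance limits above can be summarized in a small helper. This is an illustrative sketch (the function name is ours, not a NetApp tool), using the figures quoted in this section: 500 m for stretch, 30 km for fabric out of the box, and 100 km for fabric with a PVR.

```shell
# metrocluster_variant: given site separation in meters, print which
# MetroCluster variant the distance limits in this section would allow.
metrocluster_variant() {
    meters=$1
    if [ "$meters" -le 500 ]; then
        echo "stretch"
    elif [ "$meters" -le 30000 ]; then
        echo "fabric"
    elif [ "$meters" -le 100000 ]; then
        echo "fabric with PVR"
    else
        echo "unsupported"
    fi
}
```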
Functionally, the switched MetroCluster environment is identical to the nonswitched environment. The major exception is the distance that can be achieved with the switched back end.
Here is an example of the long-distance version. The cluster interconnect, the NVRAM mirroring, the heartbeat, and the disk mirroring all go over dark fiber. As with standard clusters, both sides are in production: volume X is mirrored over to X prime, and volume Y, in production on the other side, is mirrored over to Y prime.
The mirroring of data can go in both directions and frequently is performed both ways. Brocade switches are used to achieve the distance, and the switch must be a Brocade model from the specific set of switches that NetApp sells with the solution.
Fabric MetroCluster Metropolitan Area Distances
32
The switched MetroCluster deployment uses high-powered, longwave small form-factor pluggable (SFP) transceivers in the Brocade switches to achieve distance.
FC Switches
Dark fiber
A-loop
B-loop
HA interconnect (FC-VI)
100 km with Policy-Variance Request (PVR)
Building A Building B
Vol X
Vol Y’
Vol Y
Vol X’
Because all connections, including the heartbeat for the HA pair, have been moved onto an FC-switched environment, a fabric MetroCluster requires an FC-VI card in the FAS6000 and FAS3000 series.
Fabric MetroCluster Connectivity (1 of 2)
Cluster interconnect: VI over FC (versus SCSI):
FC-VI (HA Interconnect) card required
FC switches:
– Disk and controller interconnect
– Brocade switches (see NetApp documentation for current models):
licensed for full fabric (multiswitch fabric)
– No support for customer-supplied switches
Configuring for long distances:
– Up to 10 km: four longwave SFPs
– Greater than 10 km:
Four extended longwave SFPs (Brocade-certified)
Required extended distance license: buffer credits set accordingly
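The "buffer credits set accordingly" point can be made concrete with a common industry rule of thumb (an approximation, not a NetApp formula): a long ISL needs roughly one buffer-to-buffer credit per kilometer of fiber for every 2 Gbps of link speed, assuming full-size frames.

```shell
# bb_credits_needed DISTANCE_KM SPEED_GBPS
# Rule-of-thumb buffer-to-buffer credit estimate for a long-distance ISL:
# about (distance_km * speed_gbps / 2) credits keeps the link full.
bb_credits_needed() {
    km=$1
    gbps=$2
    echo $(( km * gbps / 2 ))
}
```

For example, a 100 km ISL at 2 Gbps needs on the order of 100 credits, which is why the extended-distance license and its larger credit allocation are required beyond 10 km.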
In many customer solutions, NetApp uses Brocade 200E switches for physical connectivity. The MetroCluster fabric operates well in switched environments.
Switches are prewired, preconfigured, internal components of MetroCluster. Just as you do not have a choice of disks, you do not have a choice of switches to use.
Only one Inter-Switch Link (ISL) connection exists between each pair of switches. Any switch port can be used.
Trunking is not supported.
This configuration uses a VI interconnect (X1922A) card; the interconnect is VI over FC (versus SCSI). The card is a different version of the standard QLogic QLA2352, currently a 2-Gb card.
Fabric MetroCluster Connectivity (2 of 2)
34
Storage ports:
– Can be 2 Gbps: two dual-port FC HBAs or four onboard ports
– Can be 4 Gbps:
Four onboard ports (model-specific)
Quad-port FC HBAs
Disk shelves:
– Shelves on each loop must be the same speed.
– Two shelves per loop is the maximum.
– ATA shelves are not supported.
– Depending on the FAS system, ownership is determined by software or hardware rules.
– Disk shelves are attached to the same ports on both switches (hardware ownership).
Initially, fabric-attached MetroCluster supported only Fibre Channel (FC) arbitrated-loop shelves. With Data ONTAP 8.1 7-Mode, NetApp introduces support for an FC-to-Serial-Attached SCSI (SAS) bridge, which maintains the distance benefits of FC while leveraging newer SAS disk and shelf technology.
The FC-to-SAS bridge is the ATTO FibreBridge 6500N. This bridge has the following features:
An FC-to-SAS bridge from ATTO Technology that supports the DS4243 and DS2246 SAS disk shelves in stretch and fabric MetroCluster configurations
Two 8-Gb Fibre Channel SFP+ ports
Two x4 6-Gb SAS QSFP+ ports (only SAS port A is used; port B is disabled and not usable)
Two Ethernet ports and one serial port
Standard 1U 19-inch rack-mount form factor
Management capability through Ethernet (recommended) or RS-232
Single integrated power supply (AC 100-240 V)
Always check the NetApp Interoperability Matrix.
MetroCluster: SAS Support
FMC1-1, FMC1-2; switches S1, S2, S3, S4; ATTO FibreBridge 6500N
Fabric-attached shown; stretch also supported
Prior to Data ONTAP 8.1, a single fabric MetroCluster (FMC) uses four dedicated switches, which carry HA interconnect and storage traffic. This means that if the storage administrator needs two fabric MetroCluster setups, eight switches are required. These switches might be underutilized in some environments. In cases where the storage administrator determines that the existing switches and ISLs are carrying less than 50 percent of their maximum capacity, the administrator may opt for a shared fabric configuration. In this configuration, two fabric MetroCluster setups use just four switches.
The example on the slide illustrates a simple shared fabric MetroCluster scenario. The connections described are not the only method of connecting these storage systems, disks, and switches; take them only as an example to give better clarity to the solution. In this setup, FMC1 and FMC2 form two fabric MetroCluster pairs that share the switches and the ISLs between the switches. The switches are named S1, S2, S3, and S4, with domain IDs 1, 2, 3, and 4 respectively. For simplicity, assume each storage controller has two FC-VI and two HBA ports, with one of each connected to the primary and secondary switches. FMC1 storage controllers connect the FC-VI and HBA ports to the switches through port 0 and port 2 respectively; FMC2 storage controllers use port 1 for FC-VI and port 3 for HBA. The disk shelves are connected to the switch through ports 4, 5, 6, and 7. In addition, there are two ISLs, on ports 17 and 18, on all the switches. In summary, this configuration has F-ports on 0, 1, 2, and 3; E-ports on 17 and 18; and L-ports on 4, 5, 6, and 7.
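The port layout in that summary can be captured in a small lookup. This is only a restatement of the example wiring above (the helper name is ours, not a NetApp tool); a real deployment should follow the supported cabling in the NetApp documentation.

```shell
# port_role PORT_NUMBER
# Prints the role each switch port plays in the shared-fabric example;
# the same layout is used on S1, S2, S3, and S4.
port_role() {
    case $1 in
        0|1|2|3)  echo "F-port" ;;  # controller FC-VI (0, 1) and HBA (2, 3)
        4|5|6|7)  echo "L-port" ;;  # disk shelf loops
        17|18)    echo "E-port" ;;  # inter-switch links (ISLs)
        *)        echo "unused" ;;
    esac
}
```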
Shared Fabric
FMC1-1, FMC1-2, FMC2-1, FMC2-2 connected through switches S1, S2, S3, S4
(F-ports 0-3, L-ports 4-7, E-ports 17-18 on each switch)
HA provides fault tolerance and the ability to perform nondisruptive upgrades and maintenance.
Configuring storage systems in an HA pair provides the following benefits:
Fault tolerance: when one node fails or becomes impaired, a takeover occurs, and the partner node continues to serve the failed node’s data.
Nondisruptive software upgrades: when you halt one node and allow takeover, the partner node continues to serve data for the halted node while you upgrade the node you halted.
Nondisruptive hardware maintenance: when you halt one node and allow takeover, the partner node continues to serve data for the halted node while you replace or repair hardware in the node you halted.
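As an illustrative console sequence (simplified; the cf commands exist in 7-Mode, but consult the High-Availability Configuration Guide for the full procedure), a nondisruptive upgrade of one node of the pair follows this shape:

```
cf status     # confirm the HA pair is healthy before starting
cf takeover   # partner takes over the node to be upgraded
              # ...halt, upgrade, and reboot the taken-over node...
cf giveback   # return resources to the upgraded node
```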
High Availability Summary
Review the Data ONTAP 8.1 7-Mode High-Availability Configuration Guide for information about:
Fault tolerance
Nondisruptive software upgrades
Nondisruptive hardware maintenance
Specifications and comparisons