Data ONTAP evolution

In 1992, NetApp introduced Data ONTAP and ushered in the network-attached storage industry. Since then, NetApp has continued to add features and solutions to its product portfolio to meet the needs of its customers. In 2004, NetApp acquired Spinnaker Networks to fold its scalable clustered file system technology into Data ONTAP. That plan came to fruition in 2006 as NetApp released Data ONTAP GX, the first clustered product from NetApp. NetApp also continued to enhance and sell Data ONTAP 7-Mode. Having two products provided a way to meet the needs of the NetApp customers who were happy with the classic Data ONTAP, while allowing customers with certain application requirements to use Data ONTAP GX to achieve even higher levels of performance and benefit from the flexibility and transparency afforded by its scale-out architecture.

Although the goal was always to merge the two products into one, the migration path that would eventually enable Data ONTAP 7-Mode customers to get to clustered storage required a big leap. Enter the Data ONTAP 8 operating system. The goal of the Data ONTAP 8 operating system is to provide a single product that enables Data ONTAP 7G customers to operate a Data ONTAP 7-Mode system in a manner to which they're accustomed while providing a first step in the eventual move to a clustered environment. Clustered Data ONTAP enables Data ONTAP GX customers to upgrade and to continue to operate their clusters as they're accustomed.

With the release of clustered Data ONTAP 8.1 in 2011, NetApp achieved one more major milestone in clustered storage: unified storage support. Clustered Data ONTAP supports NAS and SAN storage.

Similarities with Data ONTAP 7-Mode

Scalability

Clustered Data ONTAP is different from "non-clustered" Data ONTAP in many ways. The most obvious difference is that a single file system can scale beyond one storage appliance, being distributed across a cluster of appliances. Each appliance is now called a "node," and many nodes can be put together to form a cluster.

This scalability is important for increasing storage capacity. It is also essential for providing the increased horsepower needed to handle an equally scalable workload, such as compute "farms" used for high-performance computing applications. In a NAS environment, Data ONTAP cluster size ranges from two to 24 nodes. In a SAN environment, currently the cluster size ranges from two to eight nodes. SAN is supported only from clustered Data ONTAP 8.1 and later.

Aggregates, volumes and namespaces

Clustered Data ONTAP continues to use aggregates and flexible volumes as its storage containers. Unlike "non-clustered" Data ONTAP, however, the volumes on a single storage appliance are no longer standalone file systems. Rather, they are pieces of a potentially very large file system, or namespace. This joining together of volumes to create large, distributed namespaces that appear as a single storage system is unique to clustered Data ONTAP.

Volume movement

Another key benefit of clustered Data ONTAP is the ability to move volumes around within a cluster, keeping the movement completely transparent to the namespace and the clients that use the namespace.

As workloads need to be balanced to improve overall performance, and as storage allocation requirements change over time, volumes can be moved to accommodate those needs.

Data ONTAP 7-Mode components

To understand the software that makes all of this possible, consider Data ONTAP 7-Mode first. Data ONTAP 7-Mode is made up of five primary components: network, protocol, WAFL, RAID, and storage. Clustered Data ONTAP has these same components, as well as some new ones that enable Data ONTAP to scale beyond a single system.

- Network: The network interface module receives data from the client and delivers it to physical RAM.
- Protocols: The protocol module determines the protocol used to transfer the data, such as CIFS or NFS in a NAS environment, and then sends the pertinent information on to WAFL.
- WAFL: The WAFL (Write Anywhere File Layout) module receives the raw data and places a copy into NVRAM (nonvolatile random-access memory).
- RAID: WAFL sends the data to RAID (redundant array of independent disks), which calculates parity to protect the data.
- Storage: RAID sends the data and parity information to the storage module, which physically performs the write to disk.

Clustered Data ONTAP components

Clustered Data ONTAP components are grouped into two logical layers: access protocols and storage. Clustered Data ONTAP combines the network and protocol functionality into the access protocols layer. Likewise, WAFL, RAID, and storage are grouped into the storage layer.

Clustering

The next step is to see how two nodes are made into a cluster. When two or more nodes are clustered through a dedicated IP network, remote protocol access requests go directly from the network access protocol layer of one node to the storage layer of the node that owns the remote data. This decoupling of the network and protocol functionality from the actual data storage allows an entire cluster of nodes to work together to make a distributed group of nodes look and act as one single storage system.

Scaling a cluster

As the cluster grows to contain more and more nodes, the cluster's combined processing power and capacity scale along with its increasing size, while maintaining a single-system image to its client machines.

Disk types

A disk is the basic unit of storage for storage systems running Data ONTAP. Understanding how Data ONTAP uses and classifies disks will help you manage your storage more effectively. Data ONTAP supports five disk types: Fibre Channel (FC), ATA, SATA, SAS, and solid-state drive (SSD). For a specific configuration, the disk types that are supported depend on the storage system model, the disk shelf type, and the I/O modules installed in the system. Administrators can always look up specific information when they need it on the NetApp Support site. Data ONTAP supports two disk connection architectures: FC-AL and SAS. FC and ATA disks use the FC-AL disk connection architecture. SAS, SATA, and SSD disks use the SAS disk connection architecture.

Disk names: HBA port ID

Names for direct-attached disks (that is, disks that are attached directly to the storage system rather than through a switch) are based on the host adapter or port number, followed by the disk ID, in the following format: portID.diskID. The port ID refers to the host adapter or port number as follows:

- For onboard ports, the port ID is the number of the port that is cabled to the disk shelf that contains the disk.
- For host adapters, the port ID is the number of the expansion slot in which the host adapter is installed, with the port number of the host adapter appended to the slot number.

Disk names: Disk ID

This example of disk numbering uses a DS14 FC disk shelf. The disk ID is an integer from 16 to 125 that corresponds to the disk shelf number and the bay in which the disk is installed. The lowest disk ID is always in the far-right bay of the first disk shelf; the next higher disk ID is in the next bay to the left, and so on. (Some loop IDs in each range are reserved and are not used.) The numbering can be summarized by the following formula: DS14 disk/loop ID = (DS14 shelf ID * 16) + bay number.
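
For instance, bay 1 of shelf 2 yields a loop ID of (2 * 16) + 1 = 33, and bay 6 of shelf 1 yields (1 * 16) + 6 = 22; these two IDs appear in the direct-attached disk names shown below.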

Two examples of names for direct-attached disks:

- A disk with an ID of 33, connected by way of FC-AL to port B of an adapter installed in expansion slot 4, would have the name 4b.33.
- A disk with an ID of 22, connected by way of FC-AL to onboard port 0a, would have the name 0a.22.

Disk ownership

Disk ownership determines which node owns a disk and which pool a disk is associated with. Understanding disk ownership enables you to maximize storage redundancy and manage your hot spares effectively. Disk ownership, which is software-based, is stored on the disk. Ownership is not determined by the topology of the storage system's physical connections as it might have been in previous versions of Data ONTAP. Software-based disk ownership gives you greater flexibility and control over disk use.
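
As a minimal sketch of checking and assigning software ownership from the cluster shell (the disk and node names are hypothetical, and exact field names can vary by release):

    cluster1::> storage disk show -fields owner
    cluster1::> storage disk assign -disk 4b.33 -owner cluster1-01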

RAID disk types

Data ONTAP classifies disks as one of four types for RAID: data, hot spare, parity, or double-parity. The RAID disk type is determined by how RAID is using a disk. Later, you will use this information to make decisions about increasing the size of an aggregate or about how many disks to use when creating an aggregate.

- Data disk: A data disk is part of a RAID group and stores data on behalf of the client.
- Hot spare disk: A hot spare disk does not hold usable data, but it is available to be added to a RAID group in an aggregate. Any functioning disk that is not assigned to an aggregate, but is assigned to a system, functions as a hot spare disk.
- Parity disk: A parity disk stores data-reconstruction information within a RAID group.
- Double-parity disk: If NetApp RAID-DP software is enabled, a double-parity disk stores double-parity information within RAID groups.

RAID groups

All Data ONTAP disks are organized into RAID groups. RAID groups provide parity protection against data loss. A RAID 4 group consists of data disks and one parity disk; a RAID-DP group also includes a double-parity disk. A RAID-DP group must have at least three disks: one or more data disks, a parity disk, and a double-parity disk. You can add disks to your RAID groups to increase usable disk space; however, you cannot remove disks to reduce disk space. Note that any new RAID-DP aggregate created after upgrading to clustered Data ONTAP 8.2 must have at least five disks, for increased reliability and data protection.

RAID groups: disk failure

If a data disk failure occurs in a RAID group, Data ONTAP replaces the failed disk with a spare disk and automatically uses parity data to reconstruct the failed disk's data on the replacement disk. Meanwhile, Data ONTAP continues to serve data to clients by reconstructing the data from parity while the replacement disk is being rebuilt. If a parity or double-parity disk failure occurs in a RAID group, Data ONTAP replaces the failed disk with a spare disk and reconstructs parity on the new disk.

Understanding aggregates

An aggregate is a logical container that encompasses the physical aspects of storage, such as disks and RAID groups. Aggregates provide the storage for volumes, and volumes provide support for the differing security, backup, performance, and data-sharing needs of your users. Each aggregate has its own RAID configuration, RAID groups, and set of assigned disks. When you create an aggregate, Data ONTAP assigns the data disks and parity disks to RAID groups that are used to create the aggregate. You can increase the size of your aggregate, either by adding disks to an existing RAID group or by adding new RAID groups; however, you cannot remove a disk to reduce storage space. You can use the aggregate to hold one or more FlexVol volumes. FlexVol volumes are logical file systems that share the physical storage resources and RAID configuration of the containing aggregate.
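
As a hedged illustration, creating an aggregate and confirming it from the cluster shell might look like the following (the node name, aggregate name, and disk count are placeholders):

    cluster1::> storage aggregate create -aggregate aggr1_node1 -node cluster1-01 -diskcount 16 -raidtype raid_dp
    cluster1::> storage aggregate show -aggregate aggr1_node1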

Remote write process

The clustered Data ONTAP write process delivers high performance. When a client writes to a file, the client operating system sends the write request to the clustered Data ONTAP operating system. The write operation is processed by the cluster in this order:

First, when a client sends a write request to Data ONTAP, the request goes to the network module. The network module checks the volume location database to find the aggregate that is hosting the volume and the node that is hosting the aggregate. The network module passes the data to the cluster session manager (abbreviated as CSM), which controls traffic through the cluster-interconnect. The CSM sends the data through the cluster-interconnect to the target node's CSM.

The storage module, which includes the WAFL file system and the RAID layer, receives the raw data. The primary job of the WAFL file system is to determine how the data will be written when the write is performed. Meanwhile, copies of all write requests are stored in NVRAM as a backup that is used for emergencies. Because NVRAM is backed by a battery, the write request survives even if power is lost. The WAFL file system also mirrors NVRAM contents over the HA interconnect to the node's HA partner, in case of a storage failover event. After the data is written to NVRAM, the WAFL file system sends an acknowledgement back to the client that the write operation was successful.

The WAFL file system continues to take incoming data and decide how to write the data on disk until a consistency point (CP) occurs. This typically happens either every 10 seconds or when NVRAM is half full. When a CP occurs, the WAFL file system locks the half of NVRAM that contains the backup of write requests that are in the process of being written; the other half of NVRAM is used for incoming requests. When the CP is finished, the locked portion of NVRAM is flushed and made ready for reuse. At the CP, the WAFL file system passes data to the RAID module. The RAID module calculates parity and adds parity information to the data that will be sent to disk. RAID then notifies the storage module that the data is ready, and the storage module physically performs the write request.

Module two

Data ONTAP 7-Mode terminology

The term "cluster" has historically been used in Data ONTAP 7-Mode to indicate what is now referred to as a high-availability, or HA, pair: two storage systems connected through an HA interconnect so that each can take over the storage and network interfaces of its partner when necessary. The storage and network interfaces owned by the system that is down are then served by the healthy partner system. This was called cluster failover (CFO).

Clustered Data ONTAP 8 cluster

In clustered Data ONTAP, HA-pair relationships continue to exist; however, the term "cluster" means something very different. A Data ONTAP cluster is a physical interconnectivity of storage systems, which are called "nodes." A cluster can include from 2 to 24 nodes in a NAS environment and 2 to 8 nodes in a SAN environment. Nodes are connected to each other by a private, nonroutable 10-Gb Ethernet interconnect. Each node has an HA partner node for storage failover (abbreviated as SFO). Both nodes are also peer nodes within the cluster. The storage of a node can failover to its HA partner, and its logical interfaces can failover to any node within the cluster. Starting with clustered Data ONTAP 8.2, two new options are available for the configuration of small clusters. For the first time, clustered Data ONTAP supports single-node clusters and switchless two-node clusters. For more information on single-node clusters, two-node switchless clusters, and other advanced topics, take the Clustered Data ONTAP 8.2 Administration instructor-led course.

Cluster components

A cluster, which is a physical entity, is made up of other physical and logical pieces. For example, a cluster is made up of nodes, and each node is made up of a controller, disks, disk shelves, NVRAM, and so on. On the disks are RAID groups and aggregates. Also, each node has a certain number of physical network ports, each with its own MAC address.

Data virtual storage servers

Data virtual storage servers, or data Vservers, can use any or all of the physical cluster resources to provide a file system, or "namespace." Multiple data Vservers can exist in a single cluster, and each namespace is disjoint from the namespaces of the other data Vservers in the cluster. The same cluster resources can be used by multiple data Vservers at the same time. Data Vservers do not have node boundaries; they are bound only by the physical cluster on which they are built.

Data virtual storage server components

Data Vservers have their own components, all of which are also virtual. Every volume is associated with exactly one data Vserver. All the Snapshot copies of that volume are also associated with that data Vserver, as are all the mirrors of that volume. In this example, each circle represents a single volume. The circles that are "stacked" behind a volume are the Snapshot copies of the volume, and the circles labeled with prime symbols (') after the letters are mirrors of the volumes with the identical letters. Each data Vserver can also have its own client-facing logical interfaces; one NFS service and one CIFS service for NAS; and one FC service and one iSCSI service for SAN.

Module two

Cluster-wide management

Clustered Data ONTAP provides two interfaces for managing nodes and clusters: the browser-based NetApp OnCommand System Manager and the cluster shell command-line interface. Although you connect to an individual node of a cluster with these user interfaces, you can see and manage the entire cluster from any node. System Manager supports only clustered Data ONTAP 8.1 or later. For support of all new clustered Data ONTAP features, go to the NetApp Support site to find the most recent version of OnCommand System Manager.

Installing OnCommand System Manager

You can configure, manage, and monitor your cluster by using OnCommand System Manager. System Manager is a web-based graphical management interface for managing common storage-system functions from a web browser. System Manager enables you to configure and manage storage in a Windows or Linux environment. System Manager is free software that you can download directly from the NetApp Support site. Use the intuitive installer to install OnCommand System Manager in only a few clicks. To learn more about System Manager, review the web-based course Technical Overview of OnCommand System Manager.

Cluster shell

The cluster shell commands are organized into a command hierarchy that groups associated commands in branches of the hierarchy. These branches are made up of command directories and commands, similar to the directories and files within a file system. Each fully-qualified cluster shell command contains the complete "path" of the command.

The command hierarchy

After you log in to the cluster shell of a node, you are placed at the top of the command-line hierarchy. At any location in the hierarchy, a question mark shows you the command directories and commands that are available at that location. Many of these are not commands at all, but rather command directories. Command directories have a greater-than sign (>) immediately after them. This is a visual clue that there is "more" and that you can navigate further to get to more subdirectories or commands.

Command navigation

For example, the volume branch of the hierarchy contains everything pertaining to volumes, such as creating, deleting, modifying, and showing volumes. In addition to the commands for working directly with volumes, this directory also contains commands for working with the Snapshot copies of volumes. You can type two dots (..) to move backward through the command hierarchy a directory at a time. Enter the top command to go straight to the top of the command hierarchy.
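
For illustration, navigating the hierarchy in a cluster shell session might look like the following (the cluster name and prompt are hypothetical):

    cluster1::> volume                        (descend into the volume command directory)
    cluster1::volume> snapshot                (descend again, into the snapshot subdirectory)
    cluster1::volume snapshot> ..             (move back up one level)
    cluster1::volume> top                     (return to the top of the hierarchy)
    cluster1::>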

Command usage

To see the syntax of a command, type the command followed by a question mark. The syntax indicates required parameters by showing them within single brackets; optional parameters are enclosed within double brackets. The brackets themselves are not part of the commands. The order in which the parameters are shown is important if you want to use positional parameters rather than named parameters. For example, to create a volume using only positional parameters, you would type volume create followed by the Vserver name, the volume name, and the aggregate name in that order, without typing the -vserver, -volume, and -aggregate parameter names.
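
As a sketch, the same volume could be created with named or with positional parameters (the Vserver, volume, and aggregate names are placeholders):

    cluster1::> volume create -vserver vs1 -volume vol1 -aggregate aggr1
    cluster1::> volume create vs1 vol1 aggr1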

Tab completion

In the cluster shell, tab completion works while you are typing part of a command directory or command by finishing what you are typing and by prompting you for what's next. Tab completion also works when you type the parameters of a particular command. Not only does it step you through the parameters that you need for a command, it can also help you fill in the values for those parameters.

Tab completion example

For example, you can enter the storage aggregate rename command by typing only a few keystrokes. If you type the letter "s" and then press the Tab key, command directories that begin with "s" are displayed. Type "st" to narrow the options down to two directories, and "sto" to narrow the options down to one directory. Then "sto" is autocompleted to "storage."

Tab completion in the command hierarchy

After you expand storage, press the Tab key again to display the command directories within that directory. The same tab completion functionality exists here, and you can use it to select the next command directory or command, also with minimal keystrokes. The tab completion functionality can also display the commands that are available for aggregates.

Tab completion of parameters

After you choose a command, tab completion can step you through the parameters and values for the command. For example, because aggregate is the first parameter, if you press the Tab key, the aggregate parameter appears, followed by a list of aggregates in the cluster that are the possible values for this parameter. You can select one of these aggregates and then continue filling in the remaining parameters for this command manually or by using tab completion.

Command abbreviations

In the cluster shell, you can abbreviate any term (command directory, command, or parameter) by typing the fewest letters required to make the term unique within its context, as you saw with the storage command. From the top level of the command hierarchy, the shortest unambiguous abbreviation for the "storage" command directory is "sto." When you type this abbreviation, no tab completion is needed.

Abbreviation strings

The aggregate command directory within the storage directory can be unambiguously abbreviated as "agg," and the rename command can be abbreviated as "ren." Therefore, the storage aggregate rename command can be abbreviated as "sto agg ren." You can follow these same rules to abbreviate the parameters. But even better than that, you can eliminate the named parameters altogether if you specify the parameter values in the exact canonical order shown in the command syntax.
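
For example, with hypothetical aggregate names (and assuming that -newname is the second parameter of the rename command, which you should verify with the question mark), the following commands are equivalent:

    cluster1::> storage aggregate rename -aggregate aggr_old -newname aggr_new
    cluster1::> sto agg ren aggr_old aggr_new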

Module three

RAID technology in clustered Data ONTAP

Aggregates in clustered Data ONTAP are formed by using either RAID 4 or RAID 6 technology to group disks together. RAID 4 groups consist of data disks and one parity disk and can withstand a single disk failure. RAID 6 groups, on the other hand, are made up of data disks and two parity disks and can withstand two disk failures. RAID-DP technology is the NetApp implementation of RAID 6.

Aggregates

An aggregate is a logical container of one or more RAID groups. Each aggregate has its own RAID configuration, which may be different from other aggregates on the node and in the cluster. The size of the aggregate can be increased by adding disks to it, but it cannot be reduced. An aggregate is "owned" by a particular node in the cluster, based on the ownership of the disks that make up the RAID groups of the aggregate.

Flash Pools

Traditional aggregates are built with matching disk types: aggregates are constructed from RAID groups of SATA, FC, SAS, or SSD disks, and hard disk types cannot be mixed within an aggregate. Flash Pools introduce a high-speed SSD tier into a standard aggregate. The SSD tier acts as a cache for data and metadata within the aggregate. The benefits of Flash Pools include improved cost and performance with fewer spindles, less rack space, and lower power and cooling requirements. Flash Pools provide highly available storage with a simple administrative model. An SSD and SATA combination delivers better cost-to-performance and cost-to-capacity ratios than pure FC or SAS solutions. Flash Pools ensure predictable and improved operation when running in degraded mode, during controller failures, and during HA takeover and giveback. They provide automatic, dynamic, and policy-based placement of data on appropriate storage tiers at WAFL block granularity for data and metadata.

Flash Pools: how to create a Flash Pool

You can enable Flash Pools on new or existing aggregates in three steps. No additional license is required to create a Flash Pool. First, select a new or existing aggregate; the selected aggregate must be a 64-bit aggregate. Then turn on the hybrid_enabled option on the aggregate. Finally, add a new SSD RAID group to the aggregate. These steps convert the aggregate to a Flash Pool and activate the storage tiers. The capacity of the SSD tier is not reflected in the total aggregate size. For example, if the original aggregate has a 10-TB capacity and you add an SSD RAID group with a 1-TB capacity, the amount of capacity in the aggregate that can be provisioned is still 10 TB.
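
A hedged sketch of those three steps from the cluster shell follows; the aggregate name and SSD count are placeholders, and the exact option and parameter names should be verified for your release:

    cluster1::> storage aggregate modify -aggregate aggr1 -hybrid-enabled true
    cluster1::> storage aggregate add-disks -aggregate aggr1 -disktype SSD -diskcount 4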

Module four

Volumes

An aggregate by itself isn't very useful, and multiple aggregates are equally boring. Things begin to get interesting when volumes are created on aggregates. A volume is a logical piece of an aggregate that contains files and directories arranged in a single file system "tree." Volumes can be resized dynamically, without the addition or removal of physical resources. Data ONTAP 7-Mode has two types of volumes: traditional and flexible. Clustered Data ONTAP has one type of volume: flexible. Flexible volumes are the same in both products.

Volumes are the context within which data is managed for a cluster. For example, volumes are the units of data for which Snapshot copies, mirrors, and tape backups are created. Volumes can be moved around in a cluster, and those moves are transparent to any clients that use them.

Namespaces in clustered Data ONTAP

One of the key features of clustered Data ONTAP is the ability to "join" volumes together to create distributed global namespaces. If a client machine mounts or maps the "root" volume of a namespace, it can navigate to every volume in that namespace, regardless of which nodes in the cluster are hosting the volumes.

Junctions

The magic that joins volumes together is called a junction. A junction is conceptually like a mount point of a Unix file system. Rather than connecting one Unix partition to another, a junction connects one volume to another volume. From a client machine, the junction simply looks like a directory. Every volume in a namespace can be connected to every other volume in that namespace. A junction can live at any directory or qtree level within the parent volume. A volume can be "unmounted" from its parent and "mounted" to a new parent at any time.
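
For example, re-junctioning a volume under a different parent from the cluster shell might look like this (the Vserver, volume, and path names are hypothetical):

    cluster1::> volume unmount -vserver vs1 -volume projects
    cluster1::> volume mount -vserver vs1 -volume projects -junction-path /eng/projects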

Creating a volume

You create a volume within a data Vserver by using either the cluster shell CLI or OnCommand System Manager. From the cluster shell, you create a volume by using the volume create command.

Volume attributes for the Vserver, volume name, and aggregate are required. For other attributes, you can use the default value unless otherwise specified. In most cases, you will specify at least a volume size and a junction path. In OnCommand System Manager, you navigate to the Vserver that will host the volume and click the Create button. From there, you specify the attributes of the volume. After a volume is created, the new volume is included in the list of volumes for your selected Vserver. To mount the volume into the Vserver's namespace, navigate to the namespace in the left pane of System Manager. When the namespace diagram appears, click the Mount button. In the Mount Volume dialog, you can specify the junction name and the path to the new junction.
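
From the cluster shell, a minimal example of creating and junctioning a volume in one step might look like the following (the names, size, and path are placeholders):

    cluster1::> volume create -vserver vs1 -volume vol_eng -aggregate aggr1_node1 -size 100GB -junction-path /eng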

Infinite volume definition

Clustered Data ONTAP 8.1.1 introduced infinite volumes. Infinite volumes are boundless, easily manageable, and scalable containers that exceed the current Data ONTAP limits for FlexVol capacity. Clustered Data ONTAP 8.2 improves infinite volumes in many ways. Infinite volumes now support NFSv3, CIFS, and NFSv4.1, including pNFS. Data ONTAP 8.2 also introduced data constituent groups, which can be used for storage-class tiers.

Infinite volume features

Infinite volumes can now coexist with FlexVol volumes on aggregates, and Vservers that are enabled for infinite volumes can coexist with Vservers that serve FlexVol volumes. Infinite volumes use a unified security style, which enables all clients to view and set file permissions, regardless of the file permissions that are currently in effect on a given file or directory. A unified security style also provides unified access control lists, which facilitate access checks using both Windows and UNIX credentials. Clustered Data ONTAP 8.2 allows you to perform disaster recovery of a namespace constituent that has been permanently lost, from a namespace mirror constituent. Infinite volumes can have mirror relationships with infinite volumes in other clusters, as well as fan-out and bidirectional mirror relationships. Infinite volumes are also now supported on all current NetApp platforms, except the FAS2000 series.

Page 17:    Web viewUse the intuitive installer to install OnCommand System Manager in only a few clicks. ... SnapVault is specified with the transfer type ―XDP.‖

Not supported with infinite volume

Infinite volumes in clustered Data ONTAP 8.2 do not support some features, including single-node clusters, qtrees and quotas, FlexCache software, cascading mirror copies, and SnapVault software.

Infinite volumes: examples

Creating infinite volumes and the aggregates that host them is very similar to creating FlexVol volumes. You create aggregates throughout the cluster to host constituent volumes. Create a Vserver that is capable of serving infinite volumes by using the -is-repository switch. Then create the infinite volume to fit the capacity of the constituent aggregates. The volume show command shows you the infinite volume. The volume show command with the -is-constituent true switch displays the list of constituent volumes.
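
For instance, assuming a repository-enabled Vserver named vs_repo, the infinite volume and its constituents could be listed as follows:

    cluster1::> volume show -vserver vs_repo
    cluster1::> volume show -vserver vs_repo -is-constituent true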

Module five

NFSv4 and NFSv4.1

The clustered Data ONTAP 8.1 operating system introduced support for the NFS version 4 (NFSv4) protocol specification, as well as elements of NFSv4.1. Clustered Data ONTAP continues to fully support NFSv3. Starting with clustered Data ONTAP 8.2, NFSv2 is no longer supported. The key feature of NFSv4 is referrals, which are discussed in this module. NFSv4.1 is a minor revision and extension of version 4.0, not a modification, so it is fully compliant with the NFSv4 specification. It extends delegations beyond files to directories and symlinks, introduces NFS sessions for enhanced efficiency and reliability, provides parallel NFS (pNFS), which is also discussed in this module, and fixes some problems with NFSv4.

NFSv4.1: pNFS examples

Remote file access is defined as file access in which a client connected through a logical interface (abbreviated as LIF) on a physical port of one controller accesses a file that is hosted on a different controller in the same cluster. Remote file access has traditionally been a performance concern for clients that use clustered Data ONTAP. In this example, a client is mounted to a data LIF that is hosted on node 2. This client has a file operation with a destination in a volume on node 5. The request is serviced by the node 2 protocol stack. The node 2 protocol stack looks up the location of the volume and directs the operation to node 5, which hosts the target volume. The request traverses the cluster-interconnect, and the result is returned to the client along the same path.

With pNFS, when a file is opened by an NFS client, the mounted data LIF on node 2 serves as the metadata path, because this is the path that is used to discover the target volume's location. If the data is hosted by node 2, the operation is handled locally. But in this case, the local node discovers that the data is on node 5. Based on the pNFS protocol, the client is redirected to a LIF that is hosted on node 5. This request, as well as subsequent requests to the volume, is serviced locally, bypassing the cluster network. When a volume is moved to an aggregate on a different node, the pNFS client data path is redirected to a data LIF that is hosted on the destination node.

NFSv4.1 pNFS features

You can enable pNFS from the command line with the vserver nfs modify command. Before enabling pNFS, ensure that all NFS clients are compatible with and configured to support NFSv4.1 and pNFS. Red Hat Enterprise Linux 6.2 and Fedora 14 are targeted for full support. Before you implement pNFS or NFSv4 referrals, decide which fits better in your environment, because they do not work together. By keeping network access and data local, pNFS reduces the amount of traffic that traverses the cluster network. Unlike the NFS referral process, pNFS works seamlessly with the client; it does not require a file system remount to ensure an optimized path. With pNFS, because the network redirect does not happen at mount time, the file handle is not left stale when a volume is moved to an aggregate on a different node.
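
As a hedged example, enabling NFSv4.1 and pNFS on a Vserver might look like the following (the option names shown are typical for clustered Data ONTAP 8.1 and 8.2 but should be verified for your release):

    cluster1::> vserver nfs modify -vserver vs1 -v4.1 enabled -v4.1-pnfs enabled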

SMB 2.0 and SMB 2.1

In addition to the SMB 1.0 protocol, clustered Data ONTAP supports SMB 2.0 and SMB 2.1. SMB 2.0 is a major revision of the SMB 1.0 protocol, including a complete reworking of the packet format. SMB 2.0 also introduced several performance improvements relative to previous versions. Enhancements include more efficient network utilization, along with request compounding, which stacks multiple SMBs into a single network packet. Larger read and write sizes exploit faster networks, and file and directory property caching is provided. Durable file handles allow an SMB 2 connection to transparently reconnect to the server if a temporary disconnection occurs, such as over a wireless connection. In addition, message signing is improved, with better configuration and interoperability, and HMAC SHA-256 replaces MD5 as the hashing algorithm.

SMB 2.1 provides important performance enhancements, including the client opportunistic lock (oplock) leasing model, large maximum transmission unit support, and improved energy efficiency for client computers. Support is provided for previous versions of SMB. The SMB 2.1 protocol also provides several minor enhancements to the SMB 2.0 specification. The clustered Data ONTAP 8.1 operating system supports most of the SMB 2.1 features. Starting with clustered Data ONTAP 8.2, support is provided for resilient file handles and BranchCache. Support for SMB 2.1 is automatically enabled when you enable the SMB 2.0 protocol on a virtual storage server (Vserver). You can use a single command to enable SMB 2.0 for a Vserver.
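
The command itself is not reproduced here; as an illustrative assumption, SMB 2.0 is typically enabled through a CIFS option on the Vserver, along these lines (the option name should be verified for your release):

    cluster1::> vserver cifs options modify -vserver vs1 -smb2-enabled true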

SMB 2.1 leases

One of the most important features in the SMB 2.1 protocol is the oplock leasing model. Leasing allows a client to hold oplocks over a wider range of scenarios. The feature offers enhanced file caching and metadata caching opportunities for the SMB client and provides major performance benefits by limiting the amount of data that needs to be transferred between the client computer and the server. This enhancement particularly benefits networks with high latency. Additionally, because the number of operations that must be directed toward an SMB file server is reduced, SMB file server scalability is increased. The new leasing model in SMB 2.1 allows greater file and handle caching opportunities for an SMB 2.1 client computer while preserving data integrity, and applications do not need to change to take advantage of this capability. New caching levels allow clients more flexibility in requesting a caching mechanism. Multiple applications can retain the cached data even after the file handle is closed. Full data caching is supported even with multiple handles, as long as those handles are opened on the same client.

BranchCache

BranchCache is a CIFS feature that caches content on computers local to requesting clients, either in distributed or hosted cache mode. In distributed cache mode, branch office client computers download content from content servers in the main office and then cache the content for other computers in the same branch office. Distributed cache mode does not require any infrastructure or services in the branch office beyond client computers that run Windows 7. In hosted cache mode, branch office client computers download content from the content servers in the main office, and a hosted cache server retrieves the content from the clients. The hosted cache server then caches the content for other client computers. The hosted cache server at the branch office must run Windows Server 2008 R2 or Windows Server 2012. Clustered Data ONTAP 8.2 and later versions support both BranchCache 1 and BranchCache 2. For more information about configuring and managing BranchCache, see the Clustered Data ONTAP 8.2 File Access and Protocols Management Guide, and the CIFS Administration on Data ONTAP instructor-led training for Data ONTAP 8.2.

SMB 3.0

Clustered Data ONTAP 8.2 introduces support for SMB 3.0 enhancements that provide BranchCache version 2, witness protocol, remote VSS for SMB shares, persistent file handles, ODX copy offload, and continuously available shares.

The Data ONTAP implementation of BranchCache reduces WAN utilization and provides improved access response time for branch office clients who are using the SMB protocol.

Microsoft Remote VSS extensions enable Remote VSS-enabled backup services, such as SnapManager for Hyper-V, to create application-consistent shadow copies for virtual machines that store data and configuration files on shares.

Persistent file handles allow an SMB connection to survive a brief network outage without the need to construct a new session.

Offloaded Data Transfer, or ODX, enables direct data transfers within or between compatible storage devices without transferring the data through the host computer. Clustered Data ONTAP also supports ODX for SAN protocols.

The continuously available share property enables clients to survive disruptive events such as failover and giveback.

Hyper-V over SMB

SMB 3.0 enables you to use continuously available SMB file shares to store Hyper-V virtual machine files on volumes, providing nondisruptive operations for both planned and unplanned events. With Data ONTAP Hyper-V over SMB, clients that connect through continuously available SMB 3.0 shares can survive disruptive events such as takeover and giveback. The witness protocol provides enhanced client failover capabilities for SMB 3 shares. The witness protocol facilitates faster failover by bypassing the LIF failover recovery period. It notifies Hyper-V servers when a node is unavailable without needing to wait for the SMB connection to time out. It provides nondisruptive operations for events such as planned and unplanned takeover and giveback, relocation of aggregates, and upgrades.

Additional NAS protocol enhancements

Clustered Data ONTAP 8.2 adds support for NAS protocol features such as NAS auditing, local users and groups, offline files, and roaming profiles. For more information about these features, see the Clustered Data ONTAP 8.2 File Access and Protocols Management Guide and the CIFS and NFS Administration on Data ONTAP instructor-led training for Data ONTAP 8.2.

Multiple networks needed

Clustered Data ONTAP uses three distinct physical networks. You have already experienced the management network, which is used when connecting to the UI of a particular node. You have also seen the use of the cluster-interconnect, which is the private network that provides for intracluster communication among all the nodes. Another network is the "data" network. This is the one through which NAS and SAN clients connect to the cluster. NetApp recommends that data and management interfaces reside on separate physical networks.

NAS: management and cluster LIFs

Each node connects to each of these networks through a unique LIF and network port. The management, cluster, and data LIFs map IP addresses to network interface cards (abbreviated as NICs) on particular nodes. For example, each node has a NIC that is designated as a management port. A management LIF, then, is an association between an IP address on the management or data network and a management port on a node. Likewise, each node contains two cluster ports, and those ports are associated with IP addresses on the cluster-interconnect by means of cluster LIFs.

NAS: data LIFs

Data LIFs are also associations between IP addresses on the data network and network ports on nodes. They differ from the node management and cluster LIFs, however, in that node management and cluster LIFs are tied permanently to nodes, but data LIFs are tied to Vservers. Their association with a node is utilitarian; for example, they use data network ports to create connections to nodes, but those LIFs provide access to the protocol services of a Vserver, not of a node.

Cluster LIF failover and migration

Each management LIF is bound to one network port of one node, but cluster LIFs can migrate among the cluster network ports within a node. If a node in a cluster has two or more cluster ports, and a problem is detected with a NIC, a network cable, or the cluster switch, the cluster LIF can transparently fail over to another cluster port on the same node. This is for high availability of the cluster itself.

Scalable SAN

Clustered Data ONTAP delivers a SAN target device that is scalable and provides location transparency and single-image, cluster-wide administration to the NetApp block access portfolio. The new SCSI target implements the SCSI-3 protocol engine for access to LUNs in clustered WAFL (Write Anywhere File Layout) FlexVol volumes. On the front end, clustered Data ONTAP works with the FC protocol and iSCSI transport modules to provide SCSI target services. On the back end, it maps SCSI operations into data packets that can traverse the cluster network to be processed by the host storage subsystem, or for coordination among the different SCSI target instances on each node. Scalable SAN introduces new data LIF types. FC LIFs are assigned to host bus adapters. Like traditional NAS data LIFs, iSCSI LIFs are assigned to Ethernet ports. Unlike NAS data LIFs, FC LIFs are assigned worldwide port names instead of IP addresses. iSCSI LIFs are assigned IP addresses but cannot be used by NAS clients; likewise, NAS LIFs cannot be used by iSCSI clients. NAS and iSCSI LIFs can coexist on a single Ethernet port. In clustered Data ONTAP 8.2, SAN is scalable up to eight-node clusters.

You must configure SAN clients to use multipath I/O to access LUNs, and Asymmetric Logical Unit Access (abbreviated as ALUA) to determine the state of a given data path to the LUNs. The direct path to a LUN refers to the path for which the LIF and LUN are hosted by the same node. The indirect path represents the path for which the LIF and LUN are hosted on separate nodes. Unlike NAS LIFs, SAN LIFs do not migrate among interfaces or nodes. Therefore, the client host uses ALUA to determine the most efficient path (or paths) to communicate with the LUN. The direct path becomes the primary path for data transfer between the host and the LUN.

When a volume that hosts a LUN is moved to an aggregate on a different node, the Vserver updates the path status, and the client polls the Vserver for the change. In this way, the new direct and indirect paths are chosen, and the client selects the best possible paths. When a node goes down and storage fails over to the partner node, the node's paths also go offline. If an appropriately zoned SAN LIF is available on the partner node, the path to the takeover node becomes the active path until the aggregate is returned to its home node. If the paths to a node become unavailable so that only indirect paths remain, but the storage doesn't fail over, the client chooses an indirect path, and the data traverses the cluster network until an optimized path is restored.
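
As a sketch, FC and iSCSI LIFs are created per Vserver much like NAS data LIFs (the LIF names, ports, and addresses here are hypothetical):

    cluster1::> network interface create -vserver vs1 -lif vs1_fc_1 -role data -data-protocol fcp -home-node cluster1-01 -home-port 0c
    cluster1::> network interface create -vserver vs1 -lif vs1_iscsi_1 -role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -address 192.168.10.21 -netmask 255.255.255.0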

File access

The NFS, CIFS, FC, and iSCSI servers for clustered Data ONTAP are not configured per node, nor are they configured per cluster; rather, they are configured per Vserver. For example, an NFS server and a CIFS server run on each node, but they work together to appear as one NFS server and one CIFS server for the entire Vserver across the cluster. The FC and iSCSI servers are also configured per Vserver. The most basic configuration of NFS, CIFS, FC, and iSCSI access for a Vserver is simple to perform; however, many other steps may be required, depending on your needs and preferences, such as Network Information Service configuration and routing group configuration.

Module six

Distributed namespaces

A namespace can and should be distributed throughout the nodes and aggregates of a cluster in order to spread the workload across the resources of the cluster. The most obvious and simple way of doing this is to add volumes to nodes and aggregates in a round robin fashion. As more and more volumes are created and added to a namespace, the distribution will tend to balance out.

Flexibility with volumes

When you first begin setting up a particular virtual server and its namespace, it might be difficult to predict how it will grow in its consumption of capacity, bandwidth, and CPU utilization. With the flexibility of NetApp clusters, you're never locked in to a particular configuration. As volumes grow and shrink, they might need to be moved to balance space utilization. Volumes might also need to be moved to alleviate "hot spots" that occur due to some volumes, aggregates, or nodes being utilized more than others. Volume movement is mostly transparent to any clients and applications that use the volume that is being moved. The exceptions to that rule are CIFS clients. Because NFS clients are "connectionless," whereas CIFS clients are not, CIFS clients might experience an interruption if they are in the midst of accessing a volume that is being moved.

Flexibility with logical interfaces

Likewise, network ports on the nodes can also become hot spots, as multiple logical interfaces that share the same node, or even the same port, begin handling more traffic. Logical interfaces can be easily migrated to a different port, node, or port and node to balance the network load within the cluster.
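
For example, a data LIF can be migrated nondisruptively with a single command (the LIF, node, and port names are placeholders):

    cluster1::> network interface migrate -vserver vs1 -lif vs1_data1 -destination-node cluster1-03 -destination-port e0c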

Load-sharing mirrors

In addition to mirroring data in order to protect it, clustered Data ONTAP provides mirroring for load balancing. "Copies" of read/write volumes, which are called load-sharing mirrors (or LS mirrors), can be used to offload read requests from their read/write volumes. Also, when a number of LS mirrors are created for a single read/write volume, the likelihood of a read request being served locally, rather than traversing the cluster network, is greatly increased, resulting in better read performance.

Load-sharing mirrors and read/write volumes

LS mirrors are separate flexible volumes that have a special relationship with one read/write volume. Mirroring in clustered Data ONTAP is asynchronous; that is, the cluster administrator is responsible for ensuring that the read/write volumes are replicated to their mirrors when necessary, either manually or by means of an automated (scheduled) replication. The read-only data is only as up to date as the administrator keeps it.

Load-sharing mirrors that are out of sync

An LS mirror is "in" the namespace at the same point as its read/write volume. So, if a volume has any LS mirrors, all client requests are sent, transparently to the clients, to an N-Blade- or SCSI-Blade-selected LS mirror rather than to the read/write volume. If the LS mirrors become out of sync with their read/write volumes, a client read request will get out-of-date information. LS mirrors are ideal for volumes that are read frequently and written infrequently.
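
A hedged sketch of creating and updating LS mirrors for a Vserver root volume from the cluster shell (the volume names are hypothetical):

    cluster1::> snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_ls1 -type LS
    cluster1::> snapmirror initialize-ls-set -source-path vs1:vs1_root
    cluster1::> snapmirror update-ls-set -source-path vs1:vs1_root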

Volume move

Clustered Data ONTAP enables you to move a volume from one aggregate or node to another within the same Vserver to improve capacity utilization and performance and to satisfy SLAs. The volume move is a nondisruptive operation. During the volume movement process, the original volume is intact and available for clients to access. You can move a FlexVol volume to a different aggregate, node, or both within the same Vserver.
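
For example (the volume and aggregate names are placeholders):

    cluster1::> volume move start -vserver vs1 -volume vol_eng -destination-aggregate aggr2_node3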

Typical NFS mount

The typical way of mounting the root volume of a namespace on an NFS client is to use the mount command followed by the IP address of the data LIF, a colon, and the mount point. This mount accesses the read/write volume unless the volume has an LS mirror. When a volume has an LS mirror and has been replicated to it, all read and write access to that volume through that NFS mount is sent to one of the LS mirrors rather than to the read/write volume. For any volumes that do not have LS mirrors, their read/write volumes continue to be used.
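
On a Linux client, such a mount might look like the following (the LIF address and mount point are hypothetical):

    client# mount 192.168.10.20:/ /mnt/vs1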

NFS mount for writes

To allow an NFS request to go to the read/write volume after it has been replicated to an LS mirror, an additional mount would need to be done to use the /.admin path. For CIFS clients, an additional step is needed within the cluster itself. You must create an additional CIFS share that uses /.admin rather than / for its path. The clients that require read/write access must use that share.
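
Sketches of both steps, with hypothetical names and addresses:

    client# mount 192.168.10.20:/.admin /mnt/vs1_rw
    cluster1::> vserver cifs share create -vserver vs1 -share-name root_rw -path /.admin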

LS mirror selection

When multiple LS mirrors exist for a volume, the node that receives the request gives preference to a local LS mirror. If there is no local LS mirror, Data ONTAP uses a round-robin algorithm to choose which "remote" LS mirror receives the request. For volumes with high read traffic, a good practice is to have an LS mirror on every node so that all read requests are served locally. Mirroring of the root volumes of virtual servers is highly recommended and is considered a best practice.

DNS load balancing

Clustered Data ONTAP also includes additional load-balancing functionality in the form of DNS load balancing. The DNS load-balancing feature works in two ways. With the first method, an initial client mount request is given the IP address of the least-utilized node and port combination. With the second method, automatic periodic rebalancing occurs for LIFs that host NFS connections: the LIFs are automatically migrated throughout the cluster to achieve evenness in terms of the number of connections, CPU utilization, and throughput. Ultimately, this feature helps to balance the overall utilization of the cluster. It does not increase the performance of any one individual node; rather, it ensures that each node is more evenly used. The result is better performance utilization from the entire cluster. DNS load balancing also simplifies maintaining the cluster. Instead of manually deciding which LIFs are used when mounting a particular global namespace, the administrator can let the system dynamically determine which LIF is the most appropriate.


Module 7: load balancing

Distributed namespaces

A namespace can, and should, be distributed across the nodes and aggregates of a cluster in order to spread the workload over the cluster's resources. The simplest way to do this is to add volumes to nodes and aggregates in a round-robin fashion. As more volumes are created and added to the namespace, the distribution tends to balance out.
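
As a minimal sketch of this kind of placement, assuming a Vserver named vs1 and one aggregate per node (all names here are hypothetical), new volumes can be created on different aggregates and joined into the namespace with junction paths:

    volume create -vserver vs1 -volume proj_a -aggregate aggr1_node1 -size 100GB -junction-path /proj_a
    volume create -vserver vs1 -volume proj_b -aggregate aggr1_node2 -size 100GB -junction-path /proj_b
    volume create -vserver vs1 -volume proj_c -aggregate aggr1_node3 -size 100GB -junction-path /proj_c

Each -junction-path mounts the new volume into the namespace, so clients see /proj_a, /proj_b, and /proj_c under the Vserver root without needing to know which node or aggregate holds the data.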

Flexibility with volumes

When you first begin setting up a particular virtual server and its namespace, it can be difficult to predict how its consumption of capacity, bandwidth, and CPU will grow. With the flexibility of NetApp clusters, you are never locked into a particular configuration. As volumes grow and shrink, they might need to be moved to balance space utilization. Volumes might also need to be moved to alleviate "hot spots" that occur when some volumes, aggregates, or nodes are used more heavily than others. Volume movement is mostly transparent to the clients and applications that use the volume being moved. The exception is CIFS clients: because NFS clients are "connectionless," whereas CIFS clients are not, CIFS clients might experience an interruption if they are accessing a volume while it is being moved.

Flexibility with logical interfaces

Likewise, network ports on the nodes can also become hot spots, as multiple logical interfaces that share the same node, or even the same port, begin handling more traffic. Logical interfaces can be easily migrated to a different port, node, or port and node to balance the network load within the cluster.
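
A minimal sketch of such a migration, assuming a data LIF named data1 on a Vserver named vs1 (names and ports are hypothetical, and exact option names can vary slightly by release):

    network interface migrate -vserver vs1 -lif data1 -destination-node node2 -destination-port e0d
    network interface revert -vserver vs1 -lif data1

The migrate command moves the LIF temporarily; changing the LIF's home node and home port with network interface modify makes the new location permanent, and revert sends the LIF back to its home port.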

Load-sharing mirrors

An LS mirror is "in" the namespace at the same point as its read/write volume. So, if a volume has any LS mirrors, all client requests are sent, transparently to the clients, to an N-Blade- or SCSI-Blade-selected LS mirror rather than to the read/write volume. LS mirrors are ideal for volumes that are read frequently and written infrequently.

Load-sharing mirrors and read/write volumes

LS mirrors are separate flexible volumes that have a special relationship with one read/write volume. Mirroring in clustered Data ONTAP is asynchronous; that is, the cluster administrator is responsible for ensuring that the read/write volumes are being replicated to their mirrors when necessary, either manually, or by means of an automated (scheduled) replication. The read-only data is only as up-to-date as the administrator keeps it.
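
As a hedged sketch of creating an LS mirror and keeping it replicated, assuming a Vserver named vs1 whose root volume is vs1_root (volume and aggregate names are hypothetical, and the vserver:volume path format shown here is the clustered Data ONTAP 8.1-and-later style):

    volume create -vserver vs1 -volume vs1_root_ls1 -aggregate aggr1_node2 -size 1GB -type DP
    snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_ls1 -type LS
    snapmirror initialize-ls-set -source-path vs1:vs1_root
    snapmirror update-ls-set -source-path vs1:vs1_root

The initialize-ls-set command performs the baseline transfer to every LS mirror of the source volume, and update-ls-set (run manually or on a schedule) keeps the whole mirror set current.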

Load-sharing mirrors that are out of sync

Because replication is asynchronous, LS mirrors can become out of sync with their read/write volume. When that happens, client read requests that are served by an LS mirror return out-of-date information until the mirrors are updated. For this reason, LS mirrors are best suited to volumes that are read frequently and written infrequently.


Volume move

Clustered Data ONTAP enables you to move a volume from one aggregate or node to another within the same Vserver to balance capacity utilization, to improve performance, or to satisfy SLAs. The volume move is a nondisruptive operation: during the move, the original volume remains intact and available for clients to access. You can move a FlexVol volume to a different aggregate, node, or both within the same Vserver.
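
A minimal sketch, assuming a volume proj_a in Vserver vs1 being moved to a hypothetical aggregate aggr1_node2:

    volume move start -vserver vs1 -volume proj_a -destination-aggregate aggr1_node2
    volume move show -vserver vs1 -volume proj_a

The volume move show command reports the progress of the move and cutover while clients continue to access the volume.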

Typical NFS mount

The typical way to mount the root volume of a namespace on an NFS client is to use the mount command followed by the IP address of a data LIF, a colon, the junction path being mounted (/ for the root of the namespace), and the local mount point. This mount accesses the read/write volume unless the volume has an LS mirror. When a volume has an LS mirror and has been replicated to it, all read and write access to that volume through that NFS mount is sent to one of the LS mirrors rather than to the read/write volume. Volumes that do not have LS mirrors continue to be accessed through their read/write volumes.
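
For example, assuming 192.0.2.10 is one of vs1's data LIF addresses (a hypothetical address):

    mount 192.0.2.10:/ /mnt/vs1
    mount 192.0.2.10:/proj_a /mnt/proj_a

The first command mounts the root of the namespace; the second mounts a single volume by its junction path.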

NFS mount for writes

To allow an NFS request to reach the read/write volume after the volume has been replicated to an LS mirror, an additional mount that uses the /.admin path is needed. For CIFS clients, an additional step is needed within the cluster itself: you must create an additional CIFS share that uses /.admin rather than / for its path, and the clients that require read/write access must use that share.
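
A sketch of both steps, reusing the hypothetical names from the earlier examples:

    mount 192.0.2.10:/.admin /mnt/vs1_rw
    vserver cifs share create -vserver vs1 -share-name root_rw -path /.admin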

LS mirror selection

When multiple LS mirrors exist for a volume, the node that receives the request gives preference to a local LS mirror. If there is no local LS mirror, Data ONTAP uses a round-robin algorithm to choose which "remote" LS mirror receives the request. For volumes with high read traffic, a good practice is to have an LS mirror on every node so that all read requests are served locally. Mirroring the root volume of every virtual server is considered a best practice.

DNS load balancing

Clustered Data ONTAP also includes additional load-balancing functionality in the form of DNS load balancing. The DNS load-balancing feature works in two ways. With the first method, initial client mount requests are answered with the IP address of the least-utilized node and port combination. With the second method, automatic periodic rebalancing occurs for LIFs that host NFS connections: the LIFs are automatically migrated throughout the cluster to even out the number of connections, CPU utilization, and throughput. Ultimately, this feature helps to balance the overall utilization of the cluster. It does not increase the performance of any one node; rather, it ensures that each node is more evenly used, which results in better utilization of the entire cluster. DNS load balancing also simplifies cluster maintenance: instead of manually deciding which LIFs are used when mounting a particular global namespace, the administrator can let the system dynamically determine the most appropriate LIF.
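
As a hedged sketch, a data LIF can be associated with a DNS load-balancing zone at creation time (the zone name, address, and port are hypothetical, and option names can differ slightly by release):

    network interface create -vserver vs1 -lif data3 -role data -data-protocol nfs -home-node node3 -home-port e0c -address 192.0.2.13 -netmask 255.255.255.0 -dns-zone vs1.example.com

Clients then mount by zone name (for example, mount vs1.example.com:/ /mnt/vs1), and the cluster answers each name lookup with the address of the least-utilized LIF in the zone.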


Module 8: data protection

Snapshot technology

A Snapshot copy consists of pointers to the blocks of storage that contain volume data. As data changes in the volume, a new block is written and the volume points to that new block, while the Snapshot copy continues to point to the original block of storage. As long as any Snapshot copy points to a block of data, that block is not deleted. The space that is used for Snapshot copies resides within the volume.


Snapshot copies of volumes can be created manually by the administrator at any time, or they can be created automatically based on Snapshot policies. To manually create a Snapshot copy, you provide the volume, the virtual server (Vserver) associated with the volume, and a Snapshot copy name of your choosing. The process is very fast, because no data is copied when the Snapshot copy is created.
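
For example, to create and list Snapshot copies manually (the volume and Snapshot names are hypothetical):

    volume snapshot create -vserver vs1 -volume proj_a -snapshot before_upgrade
    volume snapshot show -vserver vs1 -volume proj_a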

Snapshot policies

Snapshot policies dictate when Snapshot copies are to be created and how many previous copies are to be kept. For example, a Snapshot policy might cause Snapshot copies of a particular volume to be created hourly, with the last six hourly Snapshot copies being kept. Snapshot copies could also be created nightly, keeping the two previous nights’ Snapshot copies, and weekly, keeping the two previous weeks' Snapshot copies. All told, that volume would have ten Snapshot copies at all times.

Multiple snapshot policies


Unless explicitly specified, each volume that is created is assigned the Snapshot policy named default, which, as its name implies, exists by default. You can create additional Snapshot policies with up to five different schedules and retention counts and assign them to volumes. Each volume can have at most one Snapshot policy, or none at all.
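
A hedged sketch of the hourly/nightly/weekly example above (the policy and volume names are hypothetical, and the exact parameters of volume snapshot policy create, including whether the policy is Vserver-scoped, vary by release):

    volume snapshot policy create -policy six_two_two -enabled true -schedule1 hourly -count1 6 -schedule2 daily -count2 2 -schedule3 weekly -count3 2
    volume modify -vserver vs1 -volume proj_a -snapshot-policy six_two_two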

Restoring a volume from a snapshot copy

A Snapshot copy also acts as an inexpensive online backup. The data in a Snapshot copy can be accessed by NFS and CIFS clients, thus allowing individual users to restore their own data when needed. In the advanced privilege mode, a Snapshot copy can also be restored, which means that its contents instantly become the contents of its read/write volume. The current contents of the volume and all Snapshot copies "later" than the restored copy are removed. Obviously, this type of operation should not be taken lightly.
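
A minimal sketch of such a restore, reusing the hypothetical before_upgrade Snapshot copy from the earlier example:

    set -privilege advanced
    volume snapshot restore -vserver vs1 -volume proj_a -snapshot before_upgrade
    set -privilege admin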

Data protection mirrors

In addition to supporting mirroring for load balancing, clustered Data ONTAP provides mirroring for data protection and disaster recovery. Called data protection (DP) mirrors, these "copies" of read/write volumes are disk-to-disk backup copies of their associated read/write volumes. In substance, DP mirrors are like load-sharing (LS) mirrors: they are separate read-only flexible volumes that have associations with read/write volumes, they are mirrored asynchronously, and they are only as up-to-date as the administrator keeps them.

DP mirrors as online backups

Unlike LS mirrors, DP mirrors are not automatically included in their namespaces and are not accessed transparently by clients. DP mirrors are ideally suited for lower-cost, higher-capacity disks, since they are not typically accessed by high-performance applications. Also, DP mirrors are online, disk-based backups that can be used to easily restore files, directories, or entire volumes.

Creating and replicating a DP mirror


Like LS mirrors, DP mirrors must be created before they can be replicated. The first step is to create a volume to serve as the DP mirror, choosing its name and the aggregate in the cluster on which it should reside. After you create that volume, you create the mirror relationship and then initialize it to synchronize it with its read/write (source) volume.
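
A minimal sketch of that sequence (the volume and aggregate names are hypothetical):

    volume create -vserver vs1 -volume proj_a_dp -aggregate aggr2_node4 -size 100GB -type DP
    snapmirror create -source-path vs1:proj_a -destination-path vs1:proj_a_dp -type DP
    snapmirror initialize -destination-path vs1:proj_a_dp
    snapmirror update -destination-path vs1:proj_a_dp

The snapmirror update command, run manually or on a schedule attached to the relationship, keeps the DP mirror as current as the administrator requires.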

Cluster peer data protection

For greater data protection, you can create a data-protection mirror relationship between two clusters. In this type of intercluster data protection, the source volume is on one cluster, and the destination volume is on the other. When a disaster occurs, you can use the data-protection mirror copies in the peer cluster to recover data. To create a data-protection mirror relationship between cluster peers, you must first set up a cluster-peer relationship. The cluster-peering facility allows two clusters to coordinate and share resources. Before you create a cluster peer, verify that the clocks on the clusters are synchronized by using the system node date show command; the time difference between the two clusters can be at most 300 seconds. You start cluster-peer configuration by using the network interface create command to create an intercluster logical interface (LIF). Each node in each cluster that participates in a cluster-peer relationship must have at least one intercluster LIF, and the intercluster LIFs must be created on ports of the data network.
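
A hedged sketch of the peering steps on one of the clusters (the addresses, port, and LIF names are hypothetical, and whether the -vserver argument for an intercluster LIF names the admin Vserver or the node differs between releases, so treat this as an outline rather than exact syntax):

    system node date show
    network interface create -vserver cluster1 -lif cluster1_icl01 -role intercluster -home-node cluster1-01 -home-port e0e -address 192.0.2.21 -netmask 255.255.255.0
    cluster peer create -peer-addrs 192.0.2.31,192.0.2.32

The cluster peer create command is pointed at the intercluster LIF addresses of the remote cluster and typically prompts for that cluster's administrative credentials; the same steps are repeated on the peer cluster.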


SnapVault backups for clusters

NetApp’s flagship disk-to-disk backup solution is available with clustered Data ONTAP 8.2. SnapVault software leverages block-level incremental replication for a reliable, low-overhead backup solution. It provides efficient data protection by copying only the data blocks that have changed since the last backup, instead of entire files. As a result, you can back up more often while reducing your storage footprint, because no redundant data is moved or stored. With direct backups between NetApp systems, disk-to-disk vault backups minimize the need for external infrastructure and appliances. By default, vault transfers retain storage efficiency on disk and over the network, further reducing network traffic. You can also configure additional deduplication, compression, or both on the destination volume. However, if additional compression is configured on the destination volume, storage efficiencies from source to destination are not retained over the network. The key advantages of vault backups for clusters include: reduction of backup times from hours or days to minutes, 100% success rates for backup reliability, reduction of disk capacity requirements by 90% or more, simplified management across enterprise applications, and minimized network traffic. For more information about backing up FlexVol volumes to a backup vault, see the Clustered Data ONTAP Data Protection Guide.


No SnapVault commands

Because SnapVault was added to the new SnapMirror architecture and UI, there are no separate SnapVault commands. SnapVault functions are accomplished with SnapMirror commands; SnapVault is specified with the transfer type "XDP." The architecture, UI, and various behaviors were changed to accommodate scalability and server virtualization.

Basic SnapMirror commands

In clustered Data ONTAP, SnapMirror technology is organized into several types of replication relationships. "DP" is for asynchronous data protection mirror relationships. "LS" is for load-sharing mirror relationships. "XDP" is for backup vault relationships. "TDP" is for transition relationships from Data ONTAP operating in 7-Mode to clustered Data ONTAP. "RST" is a transient relationship type for restore operations. SnapMirror commands with the -type XDP option are used to configure SnapVault. The basic SnapMirror commands include snapmirror create, snapmirror initialize, snapmirror modify, snapmirror policy, snapmirror show, snapmirror update, and snapmirror restore.
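
A hedged sketch of a vault relationship built with those commands (the policy, label, volume, and aggregate names are hypothetical):

    snapmirror policy create -vserver vs1 -policy vault_daily
    snapmirror policy add-rule -vserver vs1 -policy vault_daily -snapmirror-label daily -keep 30
    volume create -vserver vs1 -volume proj_a_vault -aggregate aggr_sata -size 200GB -type DP
    snapmirror create -source-path vs1:proj_a -destination-path vs1:proj_a_vault -type XDP -policy vault_daily -schedule daily
    snapmirror initialize -destination-path vs1:proj_a_vault

The -snapmirror-label in the policy rule selects which Snapshot copies on the source are transferred and how many are retained on the vault destination.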


SMTape to seed baselines

In clustered Data ONTAP, you can attach a tape device to the source node and use SMTape in a process that is called "tape seeding." By using SMTape commands and a tape device, you can establish mirror and vault relationships for large source volumes without sending the initial baseline transfer from the source node to the destination node over the network. For more information on using vault backups, enroll in the web-based course Technical Overview of SnapVault on Clustered Data ONTAP.

NDMP Tape Backups

Clustered Data ONTAP supports tape backups by using NDMP. NDMP is an industry standard that allows third-party data management applications, such as Veritas NetBackup and IBM Tivoli Storage Manager, to control backups and restores of NetApp storage appliances. The smallest unit that can be backed up ...

Vserver-aware NDMP

Clustered Data ONTAP now enables NDMP to function at the virtual storage server (Vserver) level. Resources, including FlexVol volumes, can be backed up, restored, and scoped at the Vserver level. Vserver-aware backups are critical for implementing multitenancy. For NDMP to be aware of a Vserver, the NDMP data management application must support the cluster-aware backup (CAB) extensions, and the NDMP service must be enabled on the Vserver. After the feature is enabled, you can back up and restore all volumes that are hosted across all nodes in the Vserver. An NDMP control connection can be established on any data or intercluster LIF that is owned by a Vserver that is enabled for NDMP and owns the target volume. If a volume and tape device share the same affinity, and if the data management application supports the cluster-aware backup extensions, the backup application can perform a local backup or restore operation; you do not need to perform a three-way backup or restore operation. Vserver-aware NDMP user authentication is integrated with the role-based access control mechanism. For more information about Vserver-aware NDMP and cluster-aware backup extensions, see the Clustered Data ONTAP Data Protection Tape Backup and Recovery Guide.
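
As a brief sketch, enabling the Vserver-scoped NDMP service (the Vserver name is hypothetical; the data management application itself is configured separately):

    vserver services ndmp on -vserver vs1
    vserver services ndmp show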
