
    White Paper

EMC FAST VP FOR UNIFIED STORAGE SYSTEMS
A Detailed Review

Abstract
This white paper introduces EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP) technology and describes its features and implementation. Details on how to work with the product in the Unisphere operating environment are discussed, and usage guidance and major customer benefits are also included.

October 2011


Copyright © 2011 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

Part Number h8058.3


Table of Contents

Executive summary ................................................................ 4
    Audience ..................................................................... 4
Introduction ..................................................................... 5
Storage tiers .................................................................... 6
    Extreme Performance Tier drives: Flash drives (for VNX and CX4 platforms) ... 6
    Performance Tier drives: SAS (VNX) and Fibre Channel (CX4) .................. 6
    Capacity Tier drives: NL-SAS (VNX) and SATA (CX4) ........................... 7
FAST VP operations ............................................................... 8
    Storage pools ................................................................ 8
    FAST VP algorithm ............................................................ 9
        Statistics collection .................................................... 9
        Analysis ................................................................ 10
        Relocation .............................................................. 10
    Managing FAST VP at the storage pool level ................................. 10
        Physically allocated capacity versus reserved capacity ................. 12
        Automated scheduler ..................................................... 13
        Manual relocation ....................................................... 14
    FAST VP LUN management ..................................................... 14
        Tiering policies ........................................................ 15
        Initial placement ....................................................... 15
        Common uses for Highest Available Tiering policy ....................... 16
Using FAST VP for file ......................................................... 17
    Management .................................................................. 17
    Best practices for VNX for File ............................................ 19
    Automatic Volume Manager rules ............................................. 20
General guidance and recommendations ........................................... 23
    FAST VP and FAST Cache ..................................................... 23
    What drive mix is right for my I/O profile? ................................ 24
Conclusion ..................................................................... 25
References ..................................................................... 26


Executive summary
Fully Automated Storage Tiering for Virtual Pools (FAST VP) can lower total cost of ownership (TCO) and increase performance by intelligently managing data placement at a sub-LUN level. When FAST VP is implemented, the storage system measures, analyzes, and implements a dynamic storage-tiering policy much faster and more efficiently than a human analyst could ever achieve.

Storage provisioning can be repetitive and time-consuming, and it can produce uncertain results. It is not always obvious how to match capacity to the performance requirements of a workload's data. Even when a match is achieved, requirements change, and a storage system's provisioning will require constant adjustment.

Storage tiering puts drives of varying performance levels and cost into a storage pool. LUNs use the storage capacity they need from the pool, on the devices with the required performance characteristics. FAST VP collects I/O activity statistics at a 1 GB granularity (known as a slice). The relative activity level of each slice is used to determine which slices should be promoted to higher tiers of storage. Relocation is initiated at the user's discretion through either manual initiation or an automated scheduler. Working at such a granular level removes the need for manual, resource-intensive LUN migrations while still providing the performance levels required by the most active dataset.

FAST VP is a licensed feature available on EMC VNX series and CLARiiON CX4 series platforms. The VNX series supports a unified approach to automatic tiering for both file and block data. CX4 systems running release 30 and later are supported for the tiering of block data. FAST VP licenses are available a la carte for both platforms, or as part of a FAST Suite of licenses that offers complementary licenses for technologies such as FAST Cache, Analyzer, and Quality of Service Manager.

This white paper introduces the EMC FAST VP technology and describes its features, functions, and management.

Audience
This white paper is intended for EMC customers, partners, and employees who are considering using the FAST VP product. Some familiarity with EMC midrange storage systems is assumed. Users should be familiar with the material discussed in the white papers Introduction to EMC VNX Series Storage Systems and EMC VNX Virtual Provisioning.


Introduction
Data has a lifecycle. As data progresses through its lifecycle, it experiences varying levels of activity. When data is created, it is typically heavily used. As it ages, it is accessed less often. This is often referred to as being temporal in nature. FAST VP is a simple and elegant solution for dynamically matching storage requirements with changes in the frequency of data access. FAST VP segregates disk drives into the following three tiers:

Extreme Performance Tier: Flash drives
Performance Tier: Serial Attached SCSI (SAS) drives for VNX platforms and Fibre Channel (FC) drives for CX4 platforms
Capacity Tier: Near-Line SAS (NL-SAS) drives for VNX platforms and SATA drives for CX4 platforms

You can use FAST VP to aggressively reduce TCO and/or to increase performance. A target workload that requires a large number of Performance Tier drives can be serviced with a mix of tiers, and a much lower drive count. In some cases, an almost two-thirds reduction in drive count is achieved. In other cases, performance throughput can double by adding less than 10 percent of a pool's total capacity in Flash drives.

FAST VP has proven highly effective for a number of applications. Tests in OLTP environments with Oracle[1] or Microsoft SQL Server[2] show that users can lower their capital expenditure (by 15 percent to 38 percent), reduce power and cooling costs (by over 40 percent), and still increase performance by using FAST VP instead of a homogeneous drive deployment.

FAST VP can be used in combination with other performance optimization software, such as FAST Cache. A common strategy is to use FAST VP to gain TCO benefits while using FAST Cache to boost overall system performance. There are other scenarios where it makes sense to use FAST VP for both purposes. This paper discusses considerations for the best deployment of these technologies.

The VNX series of storage systems delivers even more value over previous systems by providing a unified approach to auto-tiering for file and block data. FAST VP is available on the VNX5300 and larger systems. Now, file data served by VNX Data Movers can also use virtual pools and the same advanced data services as block data. This provides compelling value for users who wish to optimize the use of high-performing drives across their environment.

[1] Leveraging Fully Automated Storage Tiering (FAST) with Oracle Database Applications (EMC white paper)
[2] EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC Unified Storage and EMC Fully Automated Storage Tiering (FAST) (EMC white paper)


Storage tiers
FAST VP can leverage two or all three storage tiers in a single pool. Each tier offers unique advantages in performance and cost.

Extreme Performance Tier drives: Flash drives (for VNX and CX4 platforms)
Flash drives are having a large impact on the external-disk storage system market. They are built on solid-state drive (SSD) technology. As such, they have no moving parts. The absence of moving parts makes these drives highly energy-efficient and eliminates rotational latencies. Therefore, migrating data from spinning disks to Flash drives can boost performance and create significant energy savings.

Tests show that adding a small (single-digit) percentage of Flash capacity to your storage, while using intelligent tiering products (such as FAST VP and FAST Cache), can deliver double-digit percentage gains in throughput and response-time performance in some applications. Flash drives can deliver an order of magnitude better performance than traditional spinning disks when the workload is IOPS-intensive and response-time sensitive. They are particularly effective when small random-read I/Os with a high degree of parallelism are part of the I/O profile, as they are in many transactional database applications. On the other hand, bandwidth-intensive applications perform only slightly better on Flash drives than on spinning drives.

Flash drives have a higher per-gigabyte cost than traditional spinning drives. To receive the best return, you should use Flash drives for data that requires fast response times and high IOPS. The best way to optimize the use of these high-performing resources is to allow FAST VP to migrate data to Flash drives at a sub-LUN level.

Performance Tier drives: SAS (VNX) and Fibre Channel (CX4)
Traditional spinning drives offer high levels of performance, reliability, and capacity. These drives are based on industry-standardized, enterprise-level, mechanical hard-drive technology that stores digital data on a series of rapidly rotating magnetic platters.

The Performance Tier includes 10k and 15k rpm spinning drives, which are available on all EMC midrange storage systems. They have been the performance medium of choice for many years. They also have the highest availability of any mechanical storage device. These drives continue to serve as a valuable storage tier, offering high all-around performance, including consistent response times, high throughput, and good bandwidth, at a mid-level price point.

The VNX series and CX4 series use different drive-attach technologies. VNX systems use multiple four-lane 6-Gb/s SAS back-end buses, while the CX4 series uses one or more 4-Gb/s Fibre Channel back-end loops. Despite the differences in speed, the drive assemblies on both series are very similar. When sizing a solution with these drives, you need to consider the rotational speed of the drive. However, the interconnect to the back end does not affect rule-of-thumb performance assumptions for the drive.


This also applies to the different drive form factors. The VNX series offers 2.5-inch and 3.5-inch form-factor SAS drives. The same IOPS and bandwidth performance guidelines apply to 2.5-inch 10k drives and 3.5-inch 10k drives. The CX4 series offers only 3.5-inch drives.

FAST VP differentiates tiers by drive type. However, it does not take rotational speed into consideration. We strongly encourage you to use one rotational speed for each drive type within a pool. If drives of multiple rotational speeds exist in the array, multiple pools should be implemented as well.

Capacity Tier drives: NL-SAS (VNX) and SATA (CX4)
Using capacity drives can significantly reduce energy use and free up more expensive, higher-performance capacity in higher storage tiers. Studies have shown that 60 percent to 80 percent of the capacity of many applications has little I/O activity. Capacity drives can cost about four times less than performance drives on a per-gigabyte basis, and a small fraction of the cost of Flash drives. They consume up to 96 percent less power per TB than performance drives. This offers a compelling opportunity for TCO improvement considering both purchase cost and operational efficiency.

Capacity drives are designed for maximum capacity at a modest performance level. They have a slower rotational speed than Performance Tier drives. NL-SAS drives for the VNX series have a 7.2k rpm rotational speed, while SATA drives available on the CX4 series come in 7.2k and 5.4k rpm varieties. The 7.2k rpm drives can deliver bandwidth performance within 20 percent to 30 percent of performance-class drives.

However, the reduced rotational speed is a trade-off for significantly larger capacity. For example, the current Capacity Tier drive offering is 2 TB, compared to the 600 GB Performance Tier drives and 200 GB Flash drives. These Capacity Tier drives offer roughly half the IOPS per drive of Performance Tier drives. Future drive offerings will have larger capacities, but the relative difference between disks of different tiers is expected to remain roughly the same.


Table 1. Feature tradeoffs for Flash, Performance (SAS, FC), and Capacity (NL-SAS, SATA) drives
[Table largely lost in extraction; among the noted tradeoffs, Flash drives offer high IOPS/GB and low latency.]


FAST VP operations

Storage pools

Figure 1. Heterogeneous storage pool concept

LUNs must reside in a pool to be eligible for FAST VP relocation. Pools support thick LUNs and thin LUNs. Thick LUNs are high-performing LUNs that use contiguous logical block addressing on the physical capacity assigned from the pool. Thin LUNs use a capacity-on-demand model for allocating drive capacity. Thin LUN capacity usage is tracked at a finer granularity than thick LUNs to maximize capacity optimizations. FAST VP is supported on both thick LUNs and thin LUNs.

RAID groups are by definition homogeneous and therefore are not eligible for sub-LUN tiering. LUNs in RAID groups can be migrated to pools using LUN Migration. For a more in-depth discussion of pools, please see the white paper EMC VNX Virtual Provisioning - Applied Technology.

FAST VP algorithm
FAST VP uses three strategies to identify and move the correct slices to the correct tiers: statistics collection, analysis, and relocation.

Statistics collection
A slice of data is considered "hotter" (more activity) or "colder" (less activity) than another slice of data based on the relative activity level of the two slices. Activity level is determined by counting the number of I/Os for each slice. FAST VP maintains a cumulative I/O count and weights each I/O by how recently it arrived. This weighting decays over time: new I/O is given full weight; after approximately 24 hours, the same I/O carries about half-weight; and after a week, the same I/O carries very little weight. Statistics are continuously collected (as a background task) for all pool LUNs.
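As a rough model, the weighting described above behaves like an exponential half-life. The Python sketch below is an illustration only: the 24-hour half-life is taken from the text, while the event-list representation and function name are invented for the example.

```python
HALF_LIFE_HOURS = 24.0  # per the text: an I/O carries about half-weight after ~24 hours

def decayed_activity(io_events, now_hours):
    """Cumulative I/O count for one 1 GB slice, weighting each I/O
    by how recently it arrived (newer I/O counts for more)."""
    return sum(count * 0.5 ** ((now_hours - ts) / HALF_LIFE_HOURS)
               for ts, count in io_events)

# 1,000 I/Os that just arrived vs. 1,000 I/Os from a week (168 h) ago:
print(decayed_activity([(168.0, 1000)], now_hours=168.0))  # 1000.0 (full weight)
print(decayed_activity([(0.0, 1000)], now_hours=168.0))    # ~7.8 (very little weight)
```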


Analysis
Once per hour, the collected data is analyzed. This analysis produces a rank ordering of each slice within the pool, progressing from the hottest slices to the coldest slices relative to the other slices in the same pool. (For this reason, a hot slice in one pool may be comparable to a cold slice in another pool.) There is no system-level threshold for activity level. The most recent analysis before a relocation determines where slices are relocated.

Relocation
During user-defined relocation windows, 1 GB slices are promoted according to both the rank ordering performed in the analysis stage and a tiering policy set by the user. Tiering policies are discussed in detail in the FAST VP LUN management section. During relocation, FAST VP relocates higher-priority slices to higher tiers; slices are relocated to lower tiers only if the space they occupy is required for a higher-priority slice. This way, FAST VP fully utilizes the highest-performing spindles first. Lower-tier spindles are utilized as capacity demand grows. Relocation can be initiated manually or by a user-configurable, automated scheduler.

The relocation process aims to keep 10 percent free capacity in the highest tiers in the pool. Free capacity in these tiers is used for new slice allocations of high-priority LUNs between relocations.
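A minimal sketch of this placement logic, assuming slices are already ranked and tiers are ordered highest first. The 10 percent headroom figure comes from the paragraph above; the names and data shapes are illustrative.

```python
def plan_relocation(slices, tiers, headroom=0.10):
    """Assign 1 GB slices to tiers, hottest first, top tier down.

    slices: list of (slice_id, activity_score) pairs.
    tiers:  list of (tier_name, capacity_gb), highest tier first.
    Tiers above the lowest keep ~10% free for new allocations.
    Returns {slice_id: tier_name}.
    """
    ranked = sorted(slices, key=lambda s: s[1], reverse=True)
    placement, nxt = {}, 0
    for i, (name, capacity_gb) in enumerate(tiers):
        is_lowest = (i == len(tiers) - 1)
        usable = capacity_gb if is_lowest else int(capacity_gb * (1 - headroom))
        for _ in range(usable):
            if nxt == len(ranked):
                return placement
            placement[ranked[nxt][0]] = name
            nxt += 1
    return placement

plan = plan_relocation(
    slices=[("s1", 500), ("s2", 300), ("s3", 10)],
    tiers=[("flash", 2), ("nl_sas", 10)])
print(plan)  # {'s1': 'flash', 's2': 'nl_sas', 's3': 'nl_sas'}
```

In the real feature, only the slices whose planned tier differs from where they currently sit would generate relocation work during the window.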

Managing FAST VP at the storage pool level
FAST VP properties can be viewed and managed at the pool level. Figure 2 shows the tiering information for a specific pool.

Figure 2. Storage Pool Properties window


The Tier Status section of the window shows FAST VP relocation information specific to the selected pool. Scheduled relocation can be selected at the pool level from the drop-down menu labeled Auto-Tiering, which can be set to either Scheduled or Manual. Users can also connect to the array-wide relocation schedule using the button located in the top right corner; this is discussed in the Automated scheduler section. Data Relocation Status displays the pool's state with regard to FAST VP. The Ready state indicates that relocation can begin on this pool at any time. The amount of data bound for a lower tier is shown next to Data to Move Down, and the amount of data bound for a higher tier is listed next to Data to Move Up. Below that is the estimated time required to migrate all data within the pool to the appropriate tier.

In the Tier Details section, users can see the exact distribution of their data. This panel shows all tiers of storage residing in the pool. Each tier displays the amount of data to be moved down and up, the total user capacity, the consumed capacity, and the available capacity.


Physically allocated capacity versus reserved capacity

Figure 3. General and Tiering tabs of the Pool Properties dialog

Figure 3 shows the Tiering and General tabs in the Pool Properties dialog box. The Consumed Capacity shown under the Tiering tab shows the physically allocated capacity for each tier. It does not show the capacity reserved for thick LUNs.

The Consumed Capacity shown under the General tab shows the physically allocated capacity (for all of the tiers) plus the reserved capacity that has not yet been physically allocated. For this reason, the consumed capacity shown in the General tab is greater than the sum of the consumed capacities shown under the Tiering tab. For example, in Figure 3, the General tab shows a total Consumed Capacity of 3944.7 GB. However, if you add the consumed capacities shown under the Tiering tab, the


total is 3218.3 GB. The 726.4 GB difference indicates that there are 1 GB slices of one or more thick LUNs that have not yet been physically allocated (meaning they have not been written to yet).

The same information is available through the CLI. The capacity figure returned by the CLI command storagepool -list represents the information in the General tab. The capacity figure returned by the command storagepool -list -disktype -tiers represents the information in the Tiering tab.

Automated scheduler
The scheduler launched from the Pool Properties dialog box's Relocation Schedule button is shown in Figure 2. Relocations can thus be scheduled to occur automatically. It is recommended that relocations be scheduled during off-hours to minimize any potential performance impact they may cause. Figure 4 shows the Manage Auto-Tiering window.

Figure 4. Manage Auto-Tiering window

The data relocation schedule shown in Figure 4 initiates relocations every day at 10:00 PM for a duration of eight hours (ending at 6:00 AM). You can select the days on which the relocation schedule should run. In this example, relocations run seven days a week, which is the default setting. From this status window, you can also control the data relocation rate. The default rate is set to Medium so as not to significantly impact host I/O. This rate will relocate data at up to 300-400 GB per hour.[3]
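Given the default eight-hour window and the footnoted rate, a back-of-the-envelope bound on per-window relocation volume looks like this (assuming the rate holds for the whole window):

```python
window_hours = 8                  # default 10:00 PM to 6:00 AM window
rate_gb_per_hour = (300, 400)     # default Medium rate (see footnote 3)

low, high = (r * window_hours for r in rate_gb_per_hour)
print(f"One window can relocate roughly {low}-{high} GB.")  # 2400-3200 GB
```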

If all of the data from the latest analysis is relocated within the window, it is possible for the next iteration of analysis and relocation to occur. This happens if the initial relocation completes and the next analysis occurs within the window. After the new analysis, slices are moved according to the latest slice rankings. Any data not relocated during the window is simply re-evaluated as normal with each subsequent analysis.

[3] This rate depends on system type, array utilization, and other tasks competing for array resources. High utilization rates may reduce this relocation rate.



Manual relocation
Manual relocation is initiated by the user through either the Unisphere GUI or the CLI, and it can be initiated at any time. When a manual relocation is initiated, FAST VP performs an analysis of all statistics gathered, independent of its regularly scheduled hourly analysis, before beginning the relocation. This ensures that up-to-date statistics and settings are properly accounted for prior to relocation. When manually initiating a relocation, the user specifies both the rate and the duration of the relocation.

Although the automatic scheduler is an array-wide setting, manual relocation is enacted at the pool level only. Common situations when users may want to initiate a manual relocation on a specific pool include the following:

When reconfiguring the pool (for example, adding a new tier of drives)
When LUN properties have been changed and the new priority structure needs to be realized immediately
As part of a script for a finer-grained relocation schedule

FAST VP LUN management
Some FAST VP properties are managed at the LUN level. Figure 5 shows the tiering information for a single LUN.

    Figure 5. LUN Properties window


The Tier Details section displays the current distribution of 1 GB slices within the LUN. The Tiering Policy section displays the available options for tiering policy.

Tiering policies
There are four tiering policies available within FAST VP:

Auto-tier (recommended)
Highest available tier
Lowest available tier
No data movement

Auto-tier
Auto-tier is the default setting for all pool LUNs upon their creation. FAST VP relocates slices of these LUNs based on their activity level. Slices belonging to LUNs with the auto-tier policy have second priority for capacity in the highest tier in the pool, after LUNs set to the highest available tier.

Highest available tier
The highest available tier setting should be selected for those LUNs which, although not always the most active, require high levels of performance whenever they are accessed. FAST VP prioritizes slices of a LUN with highest available tier selected above all other settings.

Slices of LUNs set to highest available tier are rank ordered with each other according to activity. Therefore, in cases where the total LUN capacity set to highest available tier is greater than the capacity of the pool's highest tier, the busiest slices occupy that capacity.

Lowest available tier
Lowest available tier should be selected for LUNs that are not performance- or response-time-sensitive. FAST VP maintains slices of these LUNs on the lowest storage tier available, regardless of activity level.

No data movement
No data movement may only be selected after a LUN has been created. FAST VP will not move slices from their current positions once the no data movement selection has been made. Statistics are still collected on these slices for use if and when the tiering policy is changed.
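One way to picture the policy rules above is as a sort key that ranks slices for the next relocation pass. This is an illustrative simplification, not EMC's implementation: highest available tier outranks auto-tier, which outranks lowest available tier, and within a policy hotter slices win; no data movement slices are simply excluded from the sort.

```python
# Lower tuples sort first and land in higher tiers.
POLICY_RANK = {"highest": 0, "auto": 1, "lowest": 2}

def relocation_key(policy, activity_score):
    """Sort key for slice placement: policy first, then activity."""
    return (POLICY_RANK[policy], -activity_score)

slices = [("lun1:0", "auto", 900), ("lun2:4", "highest", 50), ("lun3:7", "lowest", 700)]
ordered = sorted(slices, key=lambda s: relocation_key(s[1], s[2]))
print([s[0] for s in ordered])
# ['lun2:4', 'lun1:0', 'lun3:7'] - highest beats auto even with less activity
```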

Initial placement
The tiering policy chosen also affects the initial placement of a LUN's slices within the available tiers. Initial placement with the policy set to auto-tier results in the data being distributed across all storage tiers available within the pool. The distribution is based on the available capacity in the pool: if 70 percent of a pool's free capacity resides in the lowest tier, then 70 percent of the new slices will be placed in that tier.



LUNs set to highest available tier will have their component slices placed on the highest tier that has capacity available. LUNs set to lowest available tier will have their component slices placed on the lowest tier that has capacity available.

LUNs with the tiering policy set to no data movement will use the initial placement policy of the setting that preceded the change to no data movement. For example, a LUN that was previously set to highest available tier but is currently set to no data movement will still take its initial allocations from the highest tier possible.
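A sketch of the initial-placement rules just described, assuming each LUN's policy is already resolved (a no data movement LUN resolves to its prior setting, per the paragraph above). The function and data shapes are illustrative.

```python
import random

def place_new_slice(policy, tiers):
    """Pick a tier for a newly allocated 1 GB slice.

    tiers:  list of (name, free_gb), ordered from highest tier to lowest.
    policy: 'auto', 'highest', or 'lowest'.
    """
    candidates = [(n, f) for n, f in tiers if f > 0]
    if policy == "highest":
        return candidates[0][0]      # highest tier with free capacity
    if policy == "lowest":
        return candidates[-1][0]     # lowest tier with free capacity
    # 'auto': distribute in proportion to free capacity, e.g. a tier holding
    # 70% of the pool's free space receives ~70% of new slices.
    names, free = zip(*candidates)
    return random.choices(names, weights=free)[0]

tiers = [("flash", 100), ("sas", 200), ("nl_sas", 700)]
print(place_new_slice("highest", tiers))  # 'flash'
print(place_new_slice("auto", tiers))     # 'nl_sas' about 70% of the time
```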

Common uses for Highest Available Tiering policy

Response-time-sensitive applications
When a pool consists of LUNs with stringent response-time demands, it is not uncommon for users to set all LUNs in the pool to highest available tier. That way, new LUN slices are allocated from the highest tier. Since new data is often the most heavily used, this provides the best performance for those slices. At the same time, if all LUNs in the pool are set to highest available tier, slices are relocated based on their relative activity to one another. Therefore, the TCO benefits of FAST VP are still fully realized.

Large-scale migrations
The highest available tier policy is useful for large-scale migrations into a pool. When you start the migration process, it is best to fill the highest tiers of the pool first. This is especially important for live migrations. Using the auto-tier setting would place some data in the Capacity tier. At this point, FAST VP has not yet run an analysis on the new data, so it cannot distinguish between "hot" and "cold" data. Therefore, with the auto-tier setting, some of the busiest data may be placed in the Capacity tier.

In these cases, the target pool LUNs can be set to highest available tier. That way, all data is initially allocated to the highest tiers in the pool. As the higher tiers fill and capacity from the Capacity (NL-SAS) tier starts to be allocated, you can stop the migration and run a manual FAST VP relocation. Assuming an analysis has had sufficient time to run, the relocation will rank order the slices and move data appropriately. In addition, since the relocation attempts to free 10 percent of the highest tiers, there is more capacity for new slice allocations in those tiers.

You continue this iterative process while you migrate more data to the pool, running FAST VP relocations whenever most of the new data is being allocated to the Capacity tier. Once all of the data is migrated into the pool, you can set any tiering preferences you see fit.


Using FAST VP for file
In the VNX Operating Environment for File 7, file data is supported on LUNs created in pools with FAST VP configured, on both VNX Unified systems and VNX gateways with EMC Symmetrix systems.

Management
The process for implementing FAST VP for file begins by provisioning LUNs from a pool with mixed tiers (or across pools, for Symmetrix) that are placed in the protected File Storage Group. Rescanning the storage systems from the System tab in Unisphere starts a diskmark that makes the LUNs available to VNX file storage. The rescan automatically creates a pool for file using the same name as the corresponding pool for block. Additionally, it creates a disk volume in a 1:1 mapping for each LUN that was added to the File Storage Group. A file system can then be created from the pool for file on the disk volumes. The FAST VP policy that has been applied to the LUNs presented to file operates as it does for any other LUN in the system, dynamically migrating data between storage tiers in the pool.

Figure 6. FAST VP for file

FAST VP for file is supported in Unisphere and the CLI. All relevant Unisphere configuration wizards support a FAST VP configuration except for the VNX Provisioning Wizard. FAST VP properties can be seen within the properties pages of pools for file (see Figure 7 on page 18) and the property pages for volumes and file systems (see Figure 8 on page 19), but they can only be modified through the block pool or LUN areas of Unisphere. On the File System Properties page, the FAST VP tiering policy is


listed in the Advanced Data Services section, along with whether thin, compression, or mirrored is enabled. If an Advanced Data Service is not enabled, it does not appear on the screen. For more information on thin provisioning and compression, refer to the white papers EMC VNX Virtual Provisioning and EMC VNX Deduplication and Compression.

Disk type options of the mapped disk volumes are as follows:

LUNs in a storage pool with a single disk type:
    Extreme Performance (Flash drives)
    Performance (10k and 15k rpm SAS drives)
    Capacity (7.2k rpm NL-SAS)

LUNs in a storage pool with multiple disk types (used with FAST VP):
    Mixed

LUNs that are mirrored (that is, remote mirrored through MirrorView or RecoverPoint):
    Mirrored_mixed
    Mirrored_performance
    Mirrored_capacity
    Mirrored_Extreme Performance

Figure 7. File Storage Pool Properties window


Figure 8. File System Properties window

Best practices for VNX for File

The entire pool should be allocated to file systems.
The entire pool should use thick LUNs only.
Recommendations for thick LUNs (see the sketch after this list):
    There should be one thick LUN per physical disk in the pool.
    The pool LUN count should be divisible by five to facilitate striping.
    Balance SP ownership.
All LUNs in the pool should have the same tiering policy. This is necessary to support slice volumes.
Prior to VNX OE for File V7.0.3x, the use of AVM was not optimal because AVM would concatenate volumes as opposed to striping and slicing. For this reason, wherever possible you should upgrade to V7.0.3x prior to implementation. If this is not possible, Manual Volume Management can provide reasonable performance configurations by creating stripe volumes manually and building user-defined pools.
EMC does not recommend that you use block thin provisioning or compression on VNX LUNs used for file system allocations. Instead, you should use file-side thin provisioning and file-side deduplication (which includes compression).
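The thick-LUN recommendations above can be sanity-checked with a small sketch. The helper is hypothetical: the one-LUN-per-disk, divisible-by-five, and SP-balancing rules come from the list, while the round-down choice is an assumption of this example.

```python
def plan_file_luns(physical_disks):
    """Propose a thick-LUN layout for a file pool per the guidelines above.

    One LUN per physical disk, rounded down to a multiple of five so AVM
    can build five-way stripes; SP ownership alternates for balance.
    """
    lun_count = (physical_disks // 5) * 5 or physical_disks
    return [(f"LUN_{i}", "SP A" if i % 2 == 0 else "SP B")
            for i in range(lun_count)]

print(plan_file_luns(12))  # 10 LUNs, alternating SP A / SP B ownership
```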

  • 7/29/2019 White Paper - EMC FAST VP for Unified Storage Systems

    20/26

    2EMC FAST VP for Unified Storage SystemsA Detailed Review

VNX file configurations, for both VNX and VNX gateways with Symmetrix, do not expressly forbid mixing LUNs with different data service attributes, although users are warned that mixing is not recommended because of the impact of spreading a file system across, for example, a thin and a thick LUN, or LUNs with different tiering policies.

Note: VNX file configurations do not allow mixing of mirrored and non-mirrored types in a pool. If you try to do this, the diskmark will fail.

Where the highest levels of performance are required, maintain the use of RAID group LUNs for file.

Automatic Volume Manager rules
Automatic Volume Manager (AVM) rules are different when creating a file system with underlying pool LUNs as opposed to file systems with underlying RAID group LUNs. The rules for AVM with underlying pool LUNs are as follows. (A sketch of the selection logic appears after the note below.)

For VNX, the following rules apply:

Primary VNX algorithm:
1. Striping is tried first. If disk volumes cannot be striped, then concatenation is tried.
2. AVM checks for free disk volumes:
   If there are no free disk volumes and the slice option is set to No, there is not enough space available and the request fails.
   If there are free disk volumes:
      I. AVM sorts them by thin and thick disk volumes.
      II. AVM sorts the thin and thick disk volumes into size groups.
3. AVM first checks for thick disk volumes that satisfy the target number of disk volumes to stripe (default = 5).
4. AVM tries to stripe five disk volumes together, with the same size, the same data service policies, and in an SP-balanced manner. If five disk volumes cannot be found, AVM tries four, then three, and then two.
5. AVM selects SP-balanced disk volumes before selecting the same data service policies.
6. If no thick disk volumes are found, AVM then checks for thin disk volumes that satisfy the target number.
7. If the space needed is not found, AVM uses the VNX for block secondary pool-based file system algorithm to look for the space.


Note: For file system extension, AVM always tries to expand onto the existing volumes of the file system. However, if there is not enough space to fulfill the size request on the existing volumes, additional storage is obtained using the above algorithm, and AVM attempts to match the data service policies of the first used disk volume of the file system. All volumes mentioned above, whether stripes or concatenations, are sliced by default.
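The sketch below renders the primary selection rules in Python. It is a simplification under stated assumptions (steps 1-7 collapsed, unbalanced fallback and size-group ordering glossed over), not EMC's implementation.

```python
def pick_stripe_set(volumes, target=5):
    """Pick disk volumes to stripe, per the primary algorithm above.

    volumes: dicts like {"name": "d7", "kind": "thick", "size_gb": 100, "sp": "A"}.
    Thick volumes are tried before thin; within a kind, volumes are grouped
    by size and a set of `target`, then target-1, ... down to 2 is sought,
    preferring an SP-balanced (alternating SP A / SP B) pick.
    """
    for kind in ("thick", "thin"):
        by_size = {}
        for v in volumes:
            if v["kind"] == kind:
                by_size.setdefault(v["size_gb"], []).append(v)
        for n in range(target, 1, -1):
            for group in by_size.values():
                if len(group) < n:
                    continue
                a = [v for v in group if v["sp"] == "A"]
                b = [v for v in group if v["sp"] == "B"]
                balanced = [v for pair in zip(a, b) for v in pair]
                rest = [v for v in group if v not in balanced]
                return (balanced + rest)[:n]
    return None  # no set found: fall back to the secondary algorithm
```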

Secondary VNX algorithm (used when the space needed is not found by the primary algorithm):
1. Concatenation is used. Striping is not used.
2. Unless requested, slicing will not be used.
3. AVM checks for free disk volumes, and sorts them by thin and thick disk volumes.
4. AVM checks for free disk volumes:
   If there are no free disk volumes and the slice option is set to No, there is not enough space available and the request fails.
   If there are free disk volumes:
      I. AVM first checks for thick disk volumes that satisfy the size request (equal to or greater than the file system size).
      II. If none are found, AVM then checks for thin disk volumes that satisfy the size request.
      III. If still none are found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.
5. If one disk volume satisfies the size request exactly, AVM takes the selected disk volume and uses the whole disk to build the file system.
6. If a larger disk volume is found that is a better fit than any set of smaller disks, then AVM uses the larger disk volume.
7. If multiple disk volumes satisfy the size request, AVM sorts the disk volumes from smallest to largest, and then sorts them into alternating SP A and SP B lists. Starting with the first disk volume, AVM searches through the list for matching data services until the size request is met. If the size request is not met, AVM searches again but ignores the data services.

For VNX gateways with Symmetrix, the following rules apply:
1. Unless requested, slicing will not be used.
2. AVM checks for free disk volumes, and sorts them by thin and thick disk volumes for the purpose of striping together the same type of disk volumes:
   If there are no free disk volumes and the slice option is set to No, there is not enough space available and the request fails.


   If there are free disk volumes:
      I. AVM first checks for a set of eight disk volumes.
      II. If a set of eight is not found, AVM then looks for a set of four disk volumes.
      III. If a set of four is not found, AVM then looks for a set of two disk volumes.
      IV. If a set of two is not found, AVM finally looks for one disk volume.
3. When free disk volumes are found:
   a. AVM first checks for thick disk volumes that satisfy the size request, which can be equal to or greater than the file system size. If thick disk volumes are available, AVM first tries to stripe the thick disk volumes that have the same disk type. Otherwise, AVM stripes together thick disk volumes that have different disk types.
   b. If thick disks are not found, AVM then checks for thin disk volumes that satisfy the size request. If thin disk volumes are available, AVM first tries to stripe the thin disk volumes that have the same disk type, where "same" means the single disk type of the pool in which a volume resides. Otherwise, AVM stripes together thin disk volumes that have different disk types.
   c. If thin disks are not found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.
4. If neither thick nor thin disk volumes satisfy the size request, AVM then checks whether striping of one same disk type will satisfy the size request, ignoring whether the disk volumes are thick or thin.
5. If still no matches are found, AVM checks whether slicing was requested:
   a. If slicing was requested, AVM checks whether any stripes exist that satisfy the size request. If yes, AVM slices an existing stripe.
   b. If slicing was not requested, AVM checks whether any free disk volumes can be concatenated to satisfy the size request. If yes, AVM concatenates disk volumes, matching data services if possible, and builds the file system.
6. If still no matches are found, there is not enough space available and the request fails.

Managing Volumes and File Systems with VNX AVM provides further information on using AVM with mapped pools.


General guidance and recommendations
The following table displays the total number of LUNs that can be set to leverage FAST VP, based on the array model. These limits are the same as the total number of pool LUNs per system. Therefore, all pool LUNs in any given system can leverage FAST VP.

Table 2. FAST VP LUN limits

Array model          Maximum number of pool LUNs
VNX5300 / CX4-120    512
VNX5500 / CX4-240    1,024
VNX5700 / CX4-480    2,048
VNX7500 / CX4-960    2,048

FAST VP and FAST Cache
FAST Cache allows the storage system to provide Flash-drive-class performance to the most heavily accessed chunks of data across the entire system. FAST Cache absorbs I/O bursts from applications, thereby reducing the load on back-end hard disks. This improves the performance of the storage solution. For more details on this feature, refer to the EMC CLARiiON, Celerra Unified, and VNX FAST Cache white paper available on Powerlink.

The following table compares the FAST VP and FAST Cache features.

Table 3. Comparison between the FAST VP and FAST Cache features

FAST Cache: Enables Flash drives to be used to extend the existing caching capacity of the storage system.
FAST VP: Leverages pools to provide sub-LUN tiering, enabling the utilization of multiple tiers of storage simultaneously.

FAST Cache: Has finer granularity: 64 KB.
FAST VP: Less granular compared to FAST Cache: 1 GB.

FAST Cache: Copies data from HDDs to Flash drives when the data is accessed frequently.
FAST VP: Moves data between different storage tiers based on a weighted average of access statistics collected over a period of time.

FAST Cache: Continuously adapts to changes in workload.
FAST VP: Uses a relocation process to periodically make storage tiering adjustments. The default setting is one eight-hour relocation per day.

FAST Cache: Is designed primarily to improve performance.
FAST VP: While it can improve performance, it is primarily designed to improve ease of use and reduce TCO.

FAST Cache and the FAST VP sub-LUN tiering features can be used together to yield high performance and improved TCO for the storage system. For example, in scenarios where limited Flash drives are available, they can be used to create FAST Cache, and FAST VP can be used on a two-tier pool of Performance and Capacity drives. From a performance point of view, FAST Cache dynamically provides a performance benefit to any bursty data, while FAST VP moves warmer data to Performance drives and colder data to Capacity drives. From a TCO perspective, FAST Cache with a small number of Flash drives serves the data that is accessed most frequently, while FAST VP optimizes disk utilization and efficiency.


As a general rule, FAST Cache should be used in cases where storage system performance needs to be improved immediately for burst-prone data. On the other hand, FAST VP optimizes storage system TCO, as it moves data to the appropriate storage tier based on sustained data access and demands over time. FAST Cache focuses on improving performance while FAST VP focuses on improving TCO. The two features are complementary and together help improve performance and TCO.

The FAST Cache feature is storage-tier-aware and works with FAST VP to make sure that storage system resources are not wasted by unnecessarily copying data to FAST Cache if it is already on a Flash drive. If FAST VP moves a chunk of data to the Extreme Performance Tier (which consists of Flash drives), FAST Cache will not promote that chunk of data into FAST Cache, even if the FAST Cache criteria for promotion are met. This ensures that storage system resources are not wasted in copying data from one Flash drive to another.
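This tier-awareness amounts to one extra check in the promotion path. A hypothetical rendering: the hit threshold and field names are invented for the example, and the only rule taken from the text is skipping chunks already on Flash.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    tier: str    # pool tier currently holding this 64 KB chunk
    hits: int    # recent accesses counted toward promotion

def should_promote(chunk: Chunk, hit_threshold: int = 3) -> bool:
    """Decide whether a chunk is promoted into FAST Cache."""
    if chunk.tier == "extreme_performance":
        return False  # already on Flash via FAST VP: skip Flash-to-Flash copy
    return chunk.hits >= hit_threshold

print(should_promote(Chunk(tier="capacity", hits=5)))             # True
print(should_promote(Chunk(tier="extreme_performance", hits=5)))  # False
```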

A general recommendation for the initial deployment of Flash drives in a storage system is to use them for FAST Cache. However, in certain cases FAST Cache does not offer the most efficient use of Flash drives. FAST Cache tracks I/Os that are smaller than 128 KB, and requires multiple hits to 64 KB chunks to initiate promotions from rotating disks to FAST Cache. Therefore, I/O profiles that do not meet these criteria are better served by Flash drives in a pool or RAID group.

What drive mix is right for my I/O profile?
As previously mentioned, it is common for a small percentage of overall capacity to be responsible for most of the I/O activity. This is known as skew. Analysis of an I/O profile may indicate that 85 percent of the I/Os to a volume involve only 15 percent of the capacity. The resulting active capacity is called the working set. Software like FAST VP and FAST Cache keeps the working set on the highest-performing drives.
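Worked through with the numbers from this paragraph (the pool size is hypothetical; the 10 percent headroom is the relocation target mentioned earlier):

```python
pool_capacity_gb = 10_000        # hypothetical pool
working_set_fraction = 0.15      # 15% of capacity draws ~85% of the I/O
headroom = 0.10                  # relocation keeps ~10% free in the top tiers

working_set_gb = pool_capacity_gb * working_set_fraction
top_tiers_gb = working_set_gb / (1 - headroom)
print(f"Working set: {working_set_gb:.0f} GB; "
      f"size Flash + Performance tiers to at least ~{top_tiers_gb:.0f} GB.")
# Working set: 1500 GB; size Flash + Performance tiers to at least ~1667 GB.
```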

It is common for OLTP environments to yield working sets of 20 percent or less of their total capacity. These profiles hit the sweet spot for FAST VP and FAST Cache. The white papers Leveraging Fully Automated Storage Tiering (FAST) with Oracle Database Applications and EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC Unified Storage and EMC Fully Automated Storage Tiering (FAST) discuss performance and TCO benefits for several mixes of drive types.

Other I/O profiles, like Decision Support Systems (DSS), may have much larger working sets. In these cases, FAST VP may be used to deploy Flash drives, because DSS workloads are typically not FAST Cache-friendly. Capacity Tier drives may be used to lower TCO. The white paper Leveraging EMC Unified Storage System Dynamic LUNs for Data Warehouse Deployments on Powerlink offers analysis on the use of storage pools and FAST VP.

At a minimum, the capacity across the Performance Tier and Extreme Performance Tier (and/or FAST Cache) should accommodate the working set. However, capacity is not the only consideration. The spindle count of these tiers needs to be sized to handle the I/O load of the working set. Basic techniques for sizing disk layouts based on


IOPS and bandwidth are available in the EMC VNX Fundamentals for Performance and Availability white paper on Powerlink.

As discussed above, using FAST Cache as your first line of Flash-drive deployment is a practical approach when the I/O profile is amenable to its use. In circumstances where an I/O profile is not FAST Cache-friendly, Flash can be deployed in a pool or RAID group instead.

Performance Tier drives are versatile in handling a wide spectrum of I/O profiles. Therefore, we highly recommend that you include Performance Tier drives in each pool. FAST Cache can be an effective tool for handling a large percentage of activity, but inevitably there will be I/Os that have not been promoted or that are cache misses. In these cases, Performance Tier drives offer good performance for those I/Os.

Performance Tier drives also facilitate faster promotion of data into FAST Cache by quickly providing promoted 64 KB chunks to FAST Cache. This minimizes FAST Cache warm-up time as some data gets hot and other data goes cold. Lastly, if FAST Cache is ever in a degraded state due to a faulty drive, it becomes read-only. If the I/O profile has a significant component of random writes, these are best served from Performance Tier drives as opposed to Capacity drives.

Capacity drives can be used for everything else. This often equates to 60 percent to 80 percent of the pool's capacity. Of course, there are profiles with low IOPS/GB and/or sequential workloads that may result in the use of a higher percentage of Capacity Tier drives.

EMC Professional Services and qualified partners can be engaged to assist with properly sizing tiers and pools to maximize investment. They have the tools and expertise to make very specific recommendations for tier composition based on an existing I/O profile.

Conclusion
Through the use of FAST VP, users can remove complexity and management overhead from their environments. FAST VP utilizes Flash, Performance, and Capacity drives (or any combination thereof) within a single pool. LUNs within the pool can then leverage the advantages of each drive type at the 1 GB slice granularity. This sub-LUN-level tiering ensures that the most active dataset resides on the best-performing drive tier available, while infrequently used data remains on lower-cost, high-capacity drives.

Relocations can occur without user interaction on a predetermined schedule, making FAST VP a truly automated offering. In the event that relocation is required on demand, FAST VP relocation can be invoked through Unisphere on an individual pool.

Both FAST VP and FAST Cache work by placing data segments on the most appropriate storage tier based on their usage pattern. These two solutions are complementary because they work on different granularity levels and time tables.


    Implementing both FAST VP and FAST Cache can significantly improve performanceand reduce cost in the environment.

References
The following white papers are available on Powerlink:

EMC Unified Storage Best Practices for Performance and Availability: Common Platform and Block Applied Best Practices
EMC VNX Virtual Provisioning
EMC CLARiiON Storage System Fundamentals for Performance and Availability
EMC CLARiiON, Celerra Unified, and VNX FAST Cache
EMC Unisphere: Unified Storage Management Solution
An Introduction to EMC CLARiiON and Celerra Unified Platform Storage Device Technology
EMC Tiered Storage for Microsoft SQL Server 2008 Enabled by EMC Unified Storage and EMC Fully Automated Storage Tiering (FAST)
Leveraging Fully Automated Storage Tiering (FAST) with Oracle Database Applications
Leveraging EMC Unified Storage System Dynamic LUNs for Data Warehouse Deployments