HITACHI DYNAMIC TIERING (HDT): AN IN-DEPTH LOOK AT MANAGING HDT AND BEST PRACTICES
BRANDON LAMBERT, SR. MANAGER
MICHAEL ROWLEY, PRINCIPAL CONSULTANT
AMERICAS SOLUTIONS AND PRODUCTS
© Hitachi Data Systems Corporation 2013. All Rights Reserved.
Nov 03, 2014
Hitachi Dynamic Tiering simplifies storage administration by automatically optimizing data placement in 1, 2, or 3 tiers of storage that can be defined and used within a single virtual volume. Tiers of storage can be made up of internal or external (virtualized) storage, and use of HDT can lower capital costs. Simplified and unified management of HDT allows for lower operational costs and reduces the challenges of ensuring applications are placed on the appropriate classes of storage.
By attending this webcast, you will
• Hear about what makes Hitachi Dynamic Tiering a unique storage management tool that enables storage administrators to meet performance requirements at lower costs than traditional tiering methods.
• Understand various strategies to consider when monitoring application performance and relocating pages to appropriate tiers without manual intervention.
• Learn how to use Hitachi Command Suite (HCS) to manage, monitor and report on an HDT environment, and how HCS manages related storage environments.
HITACHI DYNAMIC TIERING (HDT): AN IN-DEPTH LOOK AT MANAGING HDT AND BEST PRACTICES, PART 2 OF 2
WEBTECH EDUCATIONAL SERIES
AGENDA
The HDT lifecycle
HDT performance basics
HDT basic guidelines
Monitoring HDT with Hitachi Tuning Manager
HDT LIFECYCLE
The HDT lifecycle is a continuous loop: Plan → Implement → Daily Operations (Monitor/Report) → back to Plan
Plan
‒ Design the HDT environment
‒ Consider HW limits
‒ Decide what to monitor and the cycle
‒ Review the historical performance trend
Implement
‒ Turn on HDT heat maps
‒ Build HDT pools
‒ Migrate into pools
‒ Add daily monitoring
Daily Operations (Monitor/Report)
‒ Continue daily monitoring (alarms)
‒ Basic trend report (capacity/performance)
‒ Capacity trend/plan
‒ Relocation reporting (gradual or erratic)
‒ Performance trend/troubleshoot
‒ Tune the configuration (HW, cycle, apps, policies)
‒ Adjust the configuration
HDT PERFORMANCE MONITORING
Back-end I/O (read and write) is counted per page during the monitor period
The monitor ignores “RAID I/O” (parity I/O)
Counts are IOPH for the cycle (period mode) or a weighted average (continuous mode)
HDT orders pages by count, high to low, to create a distribution function
‒ IOPH vs. terabytes
Monitor analysis is performed to determine the IOPH values that separate the tiers
[Figure: Monitoring → Aggregate the Data → Analysis. Per-page IOPH is collected across DP-VOLs, aggregated into an IOPH-vs.-capacity distribution for the pool, and analyzed (viewable in SN2) to determine the tier boundaries]
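The monitor-analysis step above can be sketched as a toy model: sort pages by IOPH, fill tiers top-down by capacity, and report the IOPH value that separates each tier. The sorting-and-fill logic, page counts, and tier capacities here are illustrative assumptions, not the actual microcode.

```python
# Toy model of the HDT monitor analysis: sort pages by IOPH (high to low),
# fill tiers top-down by capacity, and report the IOPH value separating the
# tiers. All numbers are hypothetical; the real microcode logic is not public.

def tier_boundaries(page_ioph, tier_capacity_pages):
    """Return the IOPH values separating tiers (Tier 1 capacity first)."""
    ordered = sorted(page_ioph, reverse=True)    # the distribution function
    boundaries, start = [], 0
    for cap in tier_capacity_pages[:-1]:         # the lowest tier takes the rest
        end = min(start + cap, len(ordered))
        boundaries.append(ordered[end - 1] if end > start else 0)
        start = end
    return boundaries

pages = [25, 22, 18, 14, 9, 7, 5, 3, 2, 1]       # per-page IOPH counts
print(tier_boundaries(pages, [2, 4, 4]))         # [22, 7]
```

With these sample counts, pages with 22 IOPH or more land in Tier 1 and pages with 7 IOPH or more land in Tier 2; the rest fall to Tier 3.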
POOL TIER PROPERTIES
Can display just the performance graph for a tiering policy
Shows what is being used now in the pool in terms of capacity and performance
Shows the I/O distribution across all pages in the pool. Combined with the tier range, HDT decides where the pages should go
CAUTION ABOUT PERFORMANCE UTILIZATION (P%)
P% is only an approximation of tier utilization
‒ It is based on assumptions of read/write ratios (50-50)
‒ P% does not factor in RAID I/O (parity I/O)
‒ It should not be used to calculate I/O counts
‒ It should not be used to determine relative utilization at lower P% values
‒ P% is not accurate enough for comparing small differences
P% is only used to signal that a tier may be overutilized
‒ P% cannot absolutely report that a problem exists
‒ Ignore P% unless it is over 60%
Prior to V04+a, continuous mode uses the weighted average IOPH values; in V04+a, continuous mode uses the monitoring result of the last cycle (the period mode value) to calculate P% performance utilization
A better measure of actual tier utilization is to use parity group utilization
TIER RANGE VALUES
Tier range values dynamically change according to workload
Tier range values are always calculated to keep upper tiers generally full
Pages compete for upper tiers. Pages can be pushed down if more aggressive workloads come on the scene
Pages that do not remain “hot enough” (competitive) will demote
Newly active data on dormant (or new) pages or migrated volumes will need protection
- Policy settings work well
CHOOSING A MONITOR STRATEGY: RECOMMENDED START POINT
Start with continuous mode
Start with automatic
Start with 8-hour
Investigate Hitachi Tiered Storage Manager for custom scheduling of monitor and relocation cycles
CHOOSING A MONITOR STRATEGY
Use tiering policies to protect or restrict tier use
3-tier pool
‒ Liberally use ALL
‒ Use level 4 if Tier 1 use should be restricted (not used) and when Tier 2 is well configured
‒ Note that incorrectly using level 5 will cause performance issues
‒ Levels 1 and 2 can cause overcommitment (and waste) of Tier 1
‒ Levels 2, 3, and 4 can overcommit Tier 2
2-tier pool
‒ Liberally use ALL
‒ Liberally use level 4 (or 3 or 5) if Tier 1 use should be restricted (not used) and when Tier 2 is well configured
RELOCATION RATES
Standard relocation throughput is about 3TB/day
Write pending and the MP utilization rate influence the pace of page relocation
‒ When write pending is 55%, a 20-second wait is inserted per page
‒ The MP utilization rate influences pacing
  60% or more: 6 pages or less in 5 seconds
  50-60%: 8 pages or less in 5 seconds
  40-50%: 10 pages or less in 5 seconds
  30-40%: 12 pages or less in 5 seconds
  30% or less: unlimited
When SOM 904 is ON, only 1 page is relocated per second
‒ For example, when a page migration takes 600 ms, the next page migration starts after a sleep of 400 ms (1,000 ms − 600 ms)
‒ If a page migration takes 1 second or more, the next page migration starts without the sleep procedure
‒ HDT relocates pages at less than 40 MB/sec
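The MP-utilization pacing table above can be written out as a small lookup function. The thresholds are copied from the slide; the actual microcode behavior (including how the 30% boundary is handled) may differ.

```python
# The MP-utilization pacing table as code (values from the slide; actual
# microcode behavior may differ). Returns the maximum pages that may be
# relocated in a 5-second window, or None for "unlimited".

def pages_per_5s(mp_util_pct):
    if mp_util_pct >= 60:
        return 6
    if mp_util_pct >= 50:
        return 8
    if mp_util_pct >= 40:
        return 10
    if mp_util_pct > 30:
        return 12
    return None  # 30% or less MP utilization: unlimited

print(pages_per_5s(55))  # 8
```

At 55% MP utilization, at most 8 pages move per 5-second window; below 30%, relocation is unpaced.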
HDT TUNING SUMMARY
HDT tunes tier ranges dynamically (neither Hitachi Tuning Manager (HTnM) nor Hitachi Tiered Storage Manager (HTSM) is used)
If a tier’s P% approaches 60% utilization, HDT aims to move I/O down a tier
‒ 60% sustained I/O leaves headroom to accommodate peaks and prevent queuing
‒ The tier range is increased, reducing the tier’s utilized capacity. Not all of the tier capacity will be used, but that is better than overloading the tier with too much IOPH
If all tiers are over (60%/60%/60%), the pool is overutilized; tier ranges are increased again to share the problem
You can’t “lose storage,” but you might put more I/O into a pool than it can handle
You should be looking for these situations, evaluating the issues, and adding capacity
HDT TUNING SUMMARY
Do not underestimate the importance of Tier 3 performance
‒ HDT will relocate dormant pages to Tier 3. If these pages become active, Tier 3 must perform well enough to cope with some host I/O and relocation I/O
Tiering policy can be used to help or hinder
‒ Helps
  Use level 3 or 4 to stage data into Tier 2 before it is needed
  Use level 1 or 2 to stage important data before “Mondays”
‒ Hinders
  Using levels 1-4 on dormant data
  Leaving level 1 or 2 set too long
T% and R% should not be changed unless needed to artificially reduce capacity
TIER SIZE MATTERS
POOL CONSUMPTION MATTERS
HDT AND HTNM: WHAT'S AVAILABLE TODAY
All HDT-level reporting in HTnM is in Performance Reporter
‒ Some point-in-time metrics are fed into Mobility
All HDT reports are custom created
A set of custom reports is included with this presentation
HDT AND HTNM: WHAT'S AVAILABLE TODAY
The nine tables in Performance Reporter for HDP/HDT are
‒ VVOL Tier Type Configuration (individual DP-VOL capacity info by tier)
‒ VVOL Tier Type I/O Information (individual DP-VOL performance metrics by tier)
‒ HDP Pool Configuration (design and capacity info)
‒ Pool Summary (total pool performance info)
‒ Pool Tier Type Configuration (HDT pool design and capacity)
‒ Pool Page Relocation (HDT pool relocation info)
‒ Pool Tier Page Relocation (HDT tier relocation info)
‒ Pool Tier Type IO Information (HDT tier performance info)
‒ Pool Tier Type Operation Status (HDT tier performance info)
HDT CAPACITY MANAGEMENT AT THE POOL
HDP/HDT POOL UTILIZATION – USEFUL FIELDS
Time Stamp for Collection (collected every 8 hours by default)
Type of Pool (HDP/HDT)
Pool Physical Capacity
Total Provisioned Capacity
Free Physical Capacity
Used Physical Capacity
Physical Used %
HDT CAPACITY MANAGEMENT AT THE POOL
HDP/HDT POOL UTILIZATION – NOTES
Reports on both HDP and HDT pools
Capacity metrics are in gigabytes
Using historical information in HTnM, reports detailing growth of the pool can be used for capacity trend analysis
Alerts can be set for pools, including usage percentage and status, as warnings for out-of-space conditions beyond the alerts set in HDvM and SN2
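The capacity trend analysis mentioned above can be sketched with a simple linear fit over historical used-capacity samples, such as HTnM history could supply. The function name, sample values, and the linear-growth assumption are all illustrative, not part of any HTnM API.

```python
# Hypothetical capacity-trend sketch: fit a linear trend to historical pool
# usage samples and estimate days until the pool reaches a threshold.
# Sample data and the linear-growth assumption are illustrative only.

def days_until_full(used_gb_by_day, capacity_gb, threshold_pct=100.0):
    """One sample per day; returns days from the last sample, or None if flat."""
    n = len(used_gb_by_day)
    days = range(n)
    mean_x = sum(days) / n
    mean_y = sum(used_gb_by_day) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, used_gb_by_day)) \
            / sum((x - mean_x) ** 2 for x in days)   # GB of growth per day
    if slope <= 0:
        return None                                  # no growth trend
    intercept = mean_y - slope * mean_x
    target = capacity_gb * threshold_pct / 100.0
    return (target - intercept) / slope - (n - 1)    # days past the last sample

samples = [400, 410, 421, 430, 441]  # daily Used Physical Capacity (GB)
print(round(days_until_full(samples, 1000)))  # 55
```

With roughly 10 GB/day of growth, the hypothetical 1 TB pool reaches 100% in about 55 days, which is the kind of number an out-of-space warning would be built on.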
HDT CAPACITY MANAGEMENT AT THE POOL
HDT POOL UTILIZATION BY TIER – USEFUL FIELDS
Media Composition by Tier
Free Tier Physical Capacity
Used Tier Physical Capacity
Total Tier Physical Capacity
Physical Tier Capacity Used as % of Pool
Physical Tier Capacity Used as % of Tier
HDT CAPACITY MANAGEMENT AT THE POOL
HDT POOL UTILIZATION BY TIER – NOTES
Capacity metrics are in gigabytes
The trend of size and usage of storage tiers can be obtained by trending total capacity, used capacity, usage percentage in pool, or usage percentage in tier
Alerts can be set for lower tiers to flag a high capacity usage percentage for review. Low tiers by default have the lowest utilization percentage due to the HDT standard of writing to higher tiers first
HDT CAPACITY AT THE DP-VOL
HDT V-VOL UTILIZATION BY TIER – USEFUL FIELDS
LDEV Number
Tier Number
Size of Volume in Tier (in MB)
Percent of Used V-VOL Capacity on Each Tier
HDT CAPACITY AT THE DP-VOL
HDT V-VOL UTILIZATION BY TIER – NOTES
A large capacity proportion (85+%) in a single tier may indicate that HDT is not suitable for the data type. Investigation of the data workload/type and comparison to other workloads in the pool is suggested
A large capacity proportion in a high tier may indicate a need to verify the data type on the volume
‒ If the data is high-performing but low-value, use of a tiering policy may eliminate waste
A historical trend of capacity movement between tiers on a V-VOL may indicate a poor candidate for HDT, or that a different HDT architecture is required for the volume
HDT PERFORMANCE AT THE POOL
HDT POOL IOPS BY TIER – USEFUL FIELDS
Average IOPS Per Tier
HDT PERFORMANCE AT THE POOL
HDT POOL IOPS BY TIER – NOTES
Data is collected every 15 minutes
IOPS per tier can be used for historical data analysis and trending
‒ View IOPS growth per tier over time
‒ Find tiers that are busy during specific cycles
  Batch vs. transaction
  Backups
  Database maintenance
  Business vs. after hours
HDT PERFORMANCE AT THE POOL
HDT POOL PERFORMANCE BY TIER – USEFUL FIELDS
Average IOPS Per Tier
Average IOPS Percentage Utilization Per Tier
HDT PERFORMANCE AT THE POOL
HDT POOL PERFORMANCE BY TIER – NOTES
Data is collected every monitoring period
Average IOPS utilization percentage per tier is the number of IOPS processed per tier compared with the total possible IOPS for the tier as defined by the storage array (the same as performance utilization (P%) in SN2)
Allows trending of IOPS and tier utilization over time
Alerts can be set on utilization to monitor when a tier reaches the standard load-sharing threshold (60%) or the overutilization point
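An alert on the 60% load-sharing threshold described above can be sketched as follows. The tier names, observed IOPS, and per-tier IOPS capabilities are hypothetical; a real check would read these from the HTnM records.

```python
# Sketch of a 60% utilization alert per tier. All names and numbers are
# hypothetical examples, not values from any particular array.

def tier_alerts(observed_iops, max_iops, warn_pct=60.0):
    """Return tiers whose utilization exceeds warn_pct, with their %."""
    alerts = {}
    for tier in observed_iops:
        util = 100.0 * observed_iops[tier] / max_iops[tier]
        if util > warn_pct:
            alerts[tier] = round(util, 1)
    return alerts

obs = {"Tier1": 9000, "Tier2": 2500, "Tier3": 800}    # observed IOPS
cap = {"Tier1": 12000, "Tier2": 5000, "Tier3": 1000}  # array-defined max IOPS
print(tier_alerts(obs, cap))  # {'Tier1': 75.0, 'Tier3': 80.0}
```

In this example, Tier 1 (75%) and Tier 3 (80%) would be flagged, matching the guidance that sustained utilization above 60% signals a tier running out of performance headroom.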
HDT PERFORMANCE AT THE POOL
HDT POOL PERFORMANCE BY TIER – NOTES
Average IOPS utilization percentage provides general insight on whether the HDT design is correct
‒ High IOPS utilization in Tier 1 or 2 indicates Tier 1 may need additional drives/parity groups to support additional performance needs
‒ High IOPS utilization in Tier 3 indicates Tier 1 and/or 2 may need additional drives/parity groups to support additional performance needs
‒ Low IOPS utilization in a single tier indicates the tier may be too large and can be maintained/reduced in subsequent pool changes
‒ Balanced IOPS utilization under 60% indicates a well-designed HDT pool
‒ Balanced IOPS utilization over 60% indicates a pool running low on performance growth capability
HDT PERFORMANCE AT THE DP-VOL
HDT V-VOL PERFORMANCE BY TIER – USEFUL FIELDS
Average IOPS Per Tier Per V-VOL
HDT PERFORMANCE AT THE DP-VOL
HDT V-VOL PERFORMANCE BY TIER – NOTES
Metrics are collected every 15 minutes
Provides insight into how the IOPS count is broken out by tier of storage per V-VOL
Can be used to look at a specific V-VOL’s or application’s use of storage tiers over historical periods
A high IOPS count in a low tier of storage, with a high concentration of pages in the same tier, may indicate additional research is needed to determine whether the volume or application is a good candidate for HDT
‒ Assuming a tiering policy is not in place for the V-VOL
HDT RELOCATION MONITORING
HDT POOL RELOCATION STATUS – USEFUL FIELDS
Pages Moved During Relocation Cycle
Relocation Progress Percentage During Relocation Cycle
Relocation Start and Stop Time
HDT RELOCATION MONITORING
HDT POOL RELOCATION STATUS – NOTES
Collected after each relocation period
The progress percentage can be used to alert the customer if relocation didn’t complete
Relocation start and end times define how long a relocation cycle takes
Relocation cycles that barely finish during the relocation window may indicate the collection/relocation configuration needs adjustment
The time between relocation start and end, divided by the number of pages moved, indicates page movement speed (rule of thumb is ~35 MB/sec)
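The rule-of-thumb calculation above can be written out as a short sketch. It assumes the 42 MB HDT page size used on Hitachi enterprise arrays; the sample numbers are made up for illustration.

```python
# Sketch of the relocation-speed rule of thumb: pages moved x page size,
# divided by the relocation window. Assumes a 42 MB HDT page; sample
# numbers are illustrative.

PAGE_MB = 42  # assumed HDT page size in MB

def relocation_speed_mb_s(pages_moved, start_epoch_s, end_epoch_s):
    """MB/sec over the relocation window (start/end as epoch seconds)."""
    elapsed = end_epoch_s - start_epoch_s
    return pages_moved * PAGE_MB / elapsed

# e.g. 7200 pages moved in a 2.4-hour window:
print(round(relocation_speed_mb_s(7200, 0, 8640)))  # 35
```

A result well below the ~35 MB/sec rule of thumb suggests relocation is being paced (write pending, MP utilization, or SOM 904), which is worth correlating with the pacing behavior covered earlier.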
HDT RELOCATION MONITORING
HDT POOL TIER RELOCATION INFORMATION – USEFUL FIELDS
Promoted and Demoted Pages by Tier
HDT RELOCATION MONITORING
HDT POOL TIER RELOCATION INFORMATION – NOTES
Collected after each relocation period
Promoted pages defines how many pages were promoted out of this tier
Demoted pages defines how many pages were demoted out of this tier
QUESTIONS AND DISCUSSION
UPCOMING WEBTECHS
2013 WebTechs
‒ Upgrade Your Enterprise with Hitachi Data Systems, December 4, 9 a.m. PT, noon ET
‒ 2014 schedule to be published soon
Check www.hds.com/webtech for
‒ Links to the recording, the presentation, and Q&A (available next week)
‒ Schedule and registration for upcoming WebTech sessions
Questions will be posted in the HDS Community: http://community.hds.com/groups/webtech
THANK YOU