Measuring Interface Latencies for SAS, Fibre Channel and iSCSI
Dennis Martin, Demartek President
Flash Memory Summit 2011, Santa Clara, CA
Demartek Company Overview
- Industry analysis with on-site test lab
- Lab includes servers, networking and storage infrastructure
  - Fibre Channel: 4 & 8 Gbps (16 Gb soon)
  - Ethernet: 1 & 10 Gbps (including NFS, CIFS, iSCSI & FCoE)
  - Servers: 8+ cores, very large RAM
  - Virtualization: ESX, Hyper-V, Xen
- We prefer to run real-world applications to test servers and storage solutions
- Currently testing various SSD and FCoE implementations
- Web: www.demartek.com
Demartek News
- Demartek Deployment Guides
  - Completed: iSCSI – May 31, 2011
  - In progress: SAS, SSD & 16Gb Fibre Channel
- Free Monthly Newsletter
  - http://www.demartek.com/Newsletter/Newsletter_main.html
  - Text "DemartekLabNotes" to 22828
- Storage Interface Comparison
  - http://www.demartek.com/Demartek_Interface_Comparison.html
  - Internet search for "Storage Interface Comparison"
Latency is Important
- OLTP applications require short response times (small latencies)
- Some transactions require several successive queries in order to provide a complete response
- HDDs have the same maximum RPM (15,000) as a decade ago, limiting latency improvements
- RAID5 striping and latency
  - Latencies can actually increase with large disk groups!
  - http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_latency
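The two mechanical points above can be made concrete with a little arithmetic: the 15,000 RPM ceiling puts a hard floor under rotational latency, and serial queries multiply whatever per-I/O latency remains. A minimal sketch (the query counts and latencies are illustrative, not measurements from these tests):

```python
# Average rotational latency is half a revolution: at 15,000 RPM an
# HDD has been stuck at a ~2 ms rotational floor for over a decade.
def avg_rotational_latency_ms(rpm: int) -> float:
    return (60_000.0 / rpm) / 2  # ms per revolution, halved

# An OLTP transaction issuing several successive (serial) queries
# multiplies per-query latency into the user-visible response time.
def transaction_response_ms(per_query_ms: float, num_queries: int) -> float:
    return per_query_ms * num_queries

print(avg_rotational_latency_ms(15_000))  # 2.0 ms floor per rotation
print(transaction_response_ms(2.0, 5))    # 10.0 ms for 5 serial queries
```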
Host Storage Interfaces
- The host storage system interface matters
- We compared the performance and latency of the same storage system with multiple host interfaces
- What kinds of performance and latency would you expect from these four host interfaces?
  - 1Gb iSCSI, 10Gb iSCSI, 6Gb SAS, 8Gb FC
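One way to frame expectations is by each link's nominal bandwidth ceiling. The figures below are the commonly quoted per-link numbers after line-encoding overhead; they are background assumptions for this comparison, not measurements from these tests, and real throughput is lower once protocol framing (especially TCP/IP for iSCSI) is subtracted:

```python
# Commonly quoted usable-bandwidth ceilings per link, in MB/s,
# after line encoding (assumed figures, not measured results).
ceilings_mb_s = {
    "1Gb iSCSI":  125,   # 1 GbE line rate / 8 bits per byte
    "10Gb iSCSI": 1250,  # 10 GbE line rate / 8
    "6Gb SAS":    600,   # per lane, after 8b/10b encoding
    "8Gb FC":     800,   # commonly quoted figure after encoding
}
for name, mbs in sorted(ceilings_mb_s.items(), key=lambda kv: -kv[1]):
    print(f"{name:>10}: {mbs} MB/s")
```

Note that raw bandwidth order (10Gb iSCSI highest) does not necessarily predict the latency ranking, which is exactly what the measurements explore.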
Evaluation Environment
Performance – Random IOPS
[Chart: SQLIO IOPS (Random). X-axis: 4KB–64KB random reads and writes; Y-axis: 0–35,000 IOPS. Series: Direct/Switched 6G SAS, Direct/Switched 8G FC, Direct/Switched 10G iSCSI, Direct/Switched 1G iSCSI]
Performance – Random MBPS
[Chart: SQLIO MBPS (Random). X-axis: 4KB–64KB random reads and writes; Y-axis: 0–1,200 MBPS. Series: Direct/Switched 6G SAS, Direct/Switched 8G FC, Direct/Switched 10Gb iSCSI, Direct/Switched 1Gb iSCSI]
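The IOPS and MBPS charts are two views of the same runs, related by the block size. A minimal sketch of the conversion (the IOPS values here are illustrative, not taken from the charts):

```python
def mbps(iops: float, block_kb: float) -> float:
    """Throughput in MB/s implied by an IOPS figure at a given block size."""
    return iops * block_kb / 1024  # KB -> MB

print(mbps(30_000, 4))   # 117.1875 -- small blocks: high IOPS, modest MB/s
print(mbps(10_000, 64))  # 625.0    -- large blocks: bandwidth dominates
```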
Performance – Sequential IOPS
[Chart: SQLIO IOPS (Sequential). X-axis: 4KB–256KB sequential reads and writes; Y-axis: 0–35,000 IOPS. Series: Direct/Switched 6G SAS, Direct/Switched 8G FC, Direct/Switched 10G iSCSI, Direct/Switched 1G iSCSI]
Performance – Sequential MBPS
[Chart: SQLIO MBPS (Sequential). X-axis: 4KB–256KB sequential reads and writes; Y-axis: 0–1,400 MBPS. Series: Direct/Switched 6G SAS, Direct/Switched 8G FC, Direct/Switched 10Gb iSCSI, Direct/Switched 1Gb iSCSI]
Performance – Latencies up to 5ms
[Chart: Latencies – Up to 5 ms. X-axis: latency (milliseconds), 0–5; Y-axis: percentage of results, 0–100. Series: Direct/Switched 6G SAS, Direct/Switched 8G FC, Direct/Switched 10G iSCSI, Direct/Switched 1G iSCSI]
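A curve like this is a cumulative distribution: for each threshold on the x-axis, the percentage of I/Os that completed at or under that latency. A sketch of the computation from per-I/O samples (the sample data below is made up for illustration):

```python
def pct_at_or_under(samples_ms, threshold_ms):
    """Percentage of latency samples at or below a threshold."""
    return 100.0 * sum(s <= threshold_ms for s in samples_ms) / len(samples_ms)

samples = [0.4, 0.7, 0.9, 1.2, 2.5, 3.1, 4.8, 6.0]  # illustrative only
print(pct_at_or_under(samples, 5.0))  # 87.5 (7 of 8 samples)
print(pct_at_or_under(samples, 1.0))  # 37.5 (3 of 8 samples)
```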
Performance Summary
Performance rankings for this test:
1. 6Gb SAS
2. 8Gb FC (close second)
3. 10Gb iSCSI
4. 1Gb iSCSI
iSCSI imposes additional overhead, and even at 10Gb, has higher latency than 6Gb SAS and 8Gb FC
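Part of that iSCSI overhead is simple framing arithmetic. A rough sketch using standard Ethernet/IP/TCP header sizes (this ignores iSCSI PDU headers, acknowledgements, and CPU/interrupt costs, so real overhead is higher than this figure suggests):

```python
# Bytes on the wire for one full-size TCP segment on standard Ethernet.
PREAMBLE_IFG = 20               # preamble + inter-frame gap
ETH_HDR, FCS = 14, 4            # Ethernet header and frame check sequence
IP_HDR, TCP_HDR = 20, 20
MTU = 1500

payload = MTU - IP_HDR - TCP_HDR            # 1460 bytes of data per frame
wire = MTU + ETH_HDR + FCS + PREAMBLE_IFG   # 1538 bytes occupying the link
print(payload, wire, round(payload / wire, 4))  # 1460 1538 0.9493
```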
Commentary
- Some real-world applications are moving toward larger I/O block sizes and placing more importance on bandwidth
- Virtualized servers result in a higher percentage of random I/O for storage systems
- Deployment of SSDs can increase CPU utilization and network bandwidth needs
- I believe that at the current rate of price decreases and capacity increases, SSDs (probably NAND flash) will become the new standard for tier-1 storage by 2012
Demartek SSD Resources
- Demartek SSD Zone: www.demartek.com/SSD.html
- Demartek is involved in ongoing real-world testing of SSDs
- Contact me regarding our upcoming Deployment Guides (SAS, SSD, 16Gb FC)
Contact Information
Dennis Martin, [email protected]
(303) 940-7575
Web: www.demartek.com
Twitter: http://twitter.com/Demartek
YouTube: www.youtube.com/Demartek
Skype: Demartek
LinkedIn: www.linkedin.com/in/dennismartin