© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
State of the Union
Fibre Channel Over Ethernet
Steve Chalmers, June 2012
FCoE: Converged Network
Save money (CapEx and OpEx) and reduce complexity at the server edge.
Let's remember to:
• Focus on solving real-world problems
• Avoid convergence for convergence's sake, where the net result could increase cost and complexity
[Diagram: converged network vs. separate networks]
Technology Hype Cycle
[Figure: Gartner hype cycle plotted against time (technology trigger, peak of inflated expectations, trough of disillusionment, plateau of productivity), with markers for end-to-end FCoE and server-edge FCoE, and Gartner's placement of FCoE as of August 2011.]
FlexNetwork Architecture
The industry's only architecture converging data center, campus, and branch
Open • Scalable • Secure • Agile • Consistent
Key Components
Server • Network • Storage
• FCoE-enabled 10 GbE switch acts as an FCoE gateway between the FC and Ethernet networks
• Replaces the need for FC ToR switches (core SAN switches are still needed)
• Natively connects to FC
• No disruption to current LAN/SAN management practices or roles
• Converge traffic at the server over a 10 GbE CNA
• Replaces multiple low-performance NICs and HBAs
Blade Servers: FCoE Deployment with HP Virtual Connect
Add 10 GbE network and storage access to blade servers without disrupting existing infrastructure.
[Diagram: an HP BladeSystem c-Class enclosure with a Virtual Connect FlexFabric module. Existing servers with multiple interface cards, cables, and transceivers (NICs, HBAs) remain alongside new server blades with FlexibleLOM FlexFabric adapters. 10 GbE Ethernet carries LAN plus iSCSI or NAS storage traffic; 8 Gb Fibre Channel provides a high-speed, transparent connection to any SAN and to FC storage.]
ToR FCoE Deployment with HP 5820
Ultimate future-proofing and investment protection: continue to use existing infrastructure, and add 10 GbE/CNA server access to Fibre Channel SANs.
[Diagram: an HP 5820 10 GbE switch with an FCoE module at top of rack. Existing servers with HBAs and NICs connect as before; new rack servers with CNAs carry DCB/FCoE traffic over a single link, for significant savings. Ethernet carries LAN plus iSCSI or NAS storage traffic; Fibre Channel provides a high-speed, transparent connection to any SAN and to FC storage.]
Fibre Channel over Ethernet (FCoE) Today
• Edge for Fibre Channel (1 hop): mainstream use. Saves the cost of FC HBAs, the slots to put HBAs in, and FC edge switch ports. Standards are clear and widely implemented.
• Edge network (2 hops): early adopters. Flexibility in FCF location offers cost savings in additional cases. Congestion management and security are not fully standardized.
• Across the data center backbone: technology evaluation. The emerging FC-BB-6 standard changes the approach noticeably compared both to the original FC-BB-5 plan and to the proprietary approaches offered by various vendors today.
If it's not simple and clear, the cost savings are not spelled out, or it locks you in with a vendor, there's no benefit!
iSCSI Today
• Light traffic (software driver over a regular NIC): mainstream use. Simple and unified, with no need for FC. Standards are clear and widely implemented.
• Heavy traffic (dedicated network for storage): subset of customers. No need for FC, and standards are clear; customers who adopted iSCSI have generally chosen this approach, though Fibre Channel is more popular in the enterprise.
• Converged network, heavy iSCSI traffic together with LAN: technology evaluation. Exploration of iSCSI over lossless Ethernet (DCB). Although TCP does the job, not all applications can tolerate a two-second TCP timeout; a proper congestion management plan is necessary.
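To make the timeout point concrete, here is an illustrative sketch of why a burst of consecutive drops can stall iSCSI I/O for seconds. It assumes RFC 6298-style retransmission behavior (an initial retransmission timeout around 1 s, doubling on each successive loss); real TCP stacks tune these values, so the numbers are rough.

```python
# Illustrative sketch: why consecutive dropped packets can stall iSCSI I/O
# for seconds. Assumes RFC 6298-style behavior: initial RTO of 1 s,
# doubling on each successive retransmission. Real stacks tune these values.

def stall_after_drops(consecutive_losses, initial_rto=1.0):
    """Total time spent waiting before the (n+1)th transmission attempt."""
    return sum(initial_rto * (2 ** i) for i in range(consecutive_losses))

for losses in range(1, 4):
    print(f"{losses} consecutive loss(es): ~{stall_after_drops(losses):.0f} s stall")
```

A stall of even one RTO is an eternity to an application expecting millisecond-scale disk latency, which is why heavy iSCSI deployments favor a dedicated or lossless network.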
Technologies and Standards
Storage Networking Taxonomy
[Diagram: taxonomy tree]
Storage
• Shared storage
  - File: NFS, SMB, NFS/RDMA, SMB2 Direct (RDMA)
  - Block: Fibre Channel, iSCSI, FCoE, shared/zoned SAS
• Server's local storage (block): simple disk, local RAID, PCIe SSD
Open question: RDMA for low-latency shared SSD access?
Converged Networking History
Ethernet historically carried "light" storage traffic
• Heavy storage traffic was always on a special cable or network
Single data center network: repeated attempts to converge
• Fibre Channel, InfiniBand, and Ethernet/iSCSI all tried
• None gained enough momentum to win in the data center
• Server interfaces, coexistence of very different traffic, and congestion management were the technical issues
FCoE is another attempt to converge
• Migrate Fibre Channel traffic to Ethernet so the Fibre Channel network can be retired
[Timeline, 1994 to 2016: Ethernet speeds 100M, 1G, 10G, 40G, 100G?; Fibre Channel speeds 1G, 2G, 4G, 8G, 16G, 32G?; iSCSI, DCB, and FCoE appear along the Ethernet track.]
Converged Network Standards
The emerging DCB (Data Center Bridging) standards for converged Ethernet, plus FCoE:
• PFC (Priority-based Flow Control): avoid packet loss by stopping traffic so buffers aren't overrun
• QCN (Quantized Congestion Notification): avoid the traffic jam caused by PFC by telling senders to slow down as buffers fill
• ETS (Enhanced Transmission Selection): carefully prioritize outbound packets from a single port
• DCBX (DCB Capability Exchange): ports agree which of these features they will use, and how
• FCoE: Fibre Channel encapsulation over Ethernet; FC-BB-5 today (mostly for the edge), FC-BB-6 emerging (end to end)
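The encapsulation itself is simple. The sketch below builds an FCoE Ethernet frame following the FC-BB-5 layout (Ethertype 0x8906; a 14-byte FCoE header carrying a 4-bit version, reserved bits, and a 1-byte SOF delimiter; the raw FC frame; then a 4-byte EOF trailer). The specific SOF/EOF delimiter codes are assumed values for illustration; consult FC-BB-5 for the full encoding table.

```python
import struct

FCOE_ETHERTYPE = 0x8906      # FCoE per FC-BB-5; FIP discovery uses 0x8914
SOF_I3, EOF_T = 0x2E, 0x42   # example start/end-of-frame delimiter codes

def fcoe_encapsulate(dst_mac, src_mac, fc_frame):
    """Wrap a raw Fibre Channel frame in an FCoE Ethernet frame (sketch).

    Layout: Ethernet header, then a 14-byte FCoE header (4-bit version +
    reserved bits + 1-byte SOF), the FC frame, and a 4-byte trailer
    (1-byte EOF + reserved). The Ethernet FCS is added by the NIC and
    omitted here.
    """
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([SOF_I3])   # version 0 + reserved, then SOF
    trailer = bytes([EOF_T]) + bytes(3)      # EOF + reserved
    return eth_hdr + fcoe_hdr + fc_frame + trailer

# A maximum-size FC frame (24-byte header + 2112-byte payload + 4-byte CRC)
# produces an Ethernet frame well over 1500 bytes, which is why FCoE links
# must support "baby jumbo" frames of roughly 2.2 KB or more.
frame = fcoe_encapsulate(bytes(6), bytes(6), bytes(2140))
print(len(frame))
```

Note that the FC frame rides inside Ethernet unmodified, which is what lets an FCF hand it to a native Fibre Channel fabric without re-encapsulation.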
Data Center Network Models
Open with controller
• OpenFlow ideal
• Switches are just the data plane; all control plane is in a central controller
• "Small matter of software"
Optimized (IRF, QFabric, FabricPath)
• State and forwarding-table calculation is centralized; data-plane inefficiencies can be removed
• End-to-end FCoE fits as an overlay network, with a central or distributed control plane
Traditional
• Inefficient: converging to new forwarding tables on change takes too long, and packet processing is repeated
• Overlaying the FCoE FC-BB-6 end-to-end model forces a closed system
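The "switches are just the data plane" model can be sketched in a few lines: the switch holds only a match/action flow table, a central controller installs the entries, and a table miss goes back to the controller. The field names and actions below are illustrative, not the OpenFlow wire protocol.

```python
# Minimal sketch of a controller-programmed switch: the switch holds only
# a match/action flow table; a central controller decides what to install.
# Field names and action strings are illustrative, not the OpenFlow protocol.

class FlowSwitch:
    def __init__(self):
        self.table = []                 # ordered list of (match_dict, action)

    def install(self, match, action):   # called by the central controller
        self.table.append((match, action))

    def forward(self, packet):
        for match, action in self.table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "punt_to_controller"     # table miss: ask the controller

sw = FlowSwitch()
sw.install({"eth_type": 0x8906}, "output:storage_uplink")  # steer FCoE
sw.install({}, "output:lan_uplink")                        # default LAN path

print(sw.forward({"eth_type": 0x8906}))  # output:storage_uplink
print(sw.forward({"eth_type": 0x0800}))  # output:lan_uplink
```

The same table abstraction is why end-to-end FCoE can run as an overlay in this model: the controller simply installs storage-specific forwarding entries alongside the LAN ones.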
FC-BB-6 (Next Generation FCoE)
[Diagram comparing the two models:]
• FC-BB-5 (existing standard): server, Layer 2 DCB Ethernet (open, multi-vendor), FCF (Fibre Channel Forwarder), Fibre Channel (single vendor), disk array
• FC-BB-6 (work in process): server, DCB Ethernet switches with FCF control and optional FDFs (single vendor), disk array
An FC-BB-6 FCF runs the FC switch software stack and forwards FCoE traffic; an FDF is a port expander controlled by a single FCF.
Congestion
[Diagram, box and chip views: server NIC, ToR switch, backbone chassis switch (linecards and fabric card), ToR switch, disk array NIC]
Common data center switch ASIC:
• About a terabit per second
• About a billion packets per second
• Has to put a packet in memory, and get another one back, about once a nanosecond to keep up
• Has enough memory to hold less than a tenth of a millisecond's worth of data passing through
• Must drop any packet it doesn't have space for, and decide that quickly
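The figures above hang together arithmetically. This sketch just runs the back-of-envelope numbers (round values, circa-2012 assumptions):

```python
# Back-of-envelope arithmetic behind the switch-ASIC figures above
# (round numbers, circa-2012 assumptions, not a specific product).

link_capacity_bps = 1e12                        # ~1 Tb/s through the chip
packet_rate_pps   = 1e9                         # ~1 billion packets/s
buffer_bits       = link_capacity_bps * 1e-4    # <0.1 ms of buffering

print(f"time per packet:  {1 / packet_rate_pps * 1e9:.0f} ns")
print(f"buffer size:      {buffer_bits / 8 / 1e6:.1f} MB")
print(f"buffer drains in: {buffer_bits / link_capacity_bps * 1e6:.0f} us")
```

A billion packets per second leaves about one nanosecond per packet, and a tenth of a millisecond at a terabit per second is only about 12.5 MB of on-chip memory, which is why the drop decision has to be instant.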
Congestion
[Same diagram as the previous slide]
What if the disk array port gets busy?
• If all traffic is TCP, buffers will fill, packets drop, and TCP adjusts its send rate
• If all traffic is FCoE (lossless), buffers will fill and a DCB PFC pause is issued
• If traffic is a mix, the two compete for buffer space; it may not be possible to tune buffer allocation for optimal performance
• FCoE traffic bursts start and stop 1000x faster than TCP's response time. Unstable behavior? We don't know…
Congestion
[Same diagram, with DRAM attached to the backbone switch's linecards]
Other choices?
• Use a deep-buffered switch for the backbone chassis
• Takes 20x as long to write every packet to DRAM and then read it back to send
• Takes >10x as many ASICs, at very high cost
• When shared solid state disk (comparable to today's PCIe cards) arrives, the switch will be a significant share of storage access latency
Data Center Networking Continues to Grow & Evolve
• Interesting new technologies are brought to market every year
• Proprietary data center (actually, data center fault containment zone) architectures are being proposed as alternatives to traditional approaches
• The way FC-BB-6 is architected will favor such designs (perhaps not today’s)
• There are several basic unsolved problems in this space (I want zero packet drops, no congestion performance collapse, low cost, and low latency all at the same time)
• Buy what you need for today, not “the architecture for the next decade”
Looking Ahead
File-based access will likely increase share
• SMB2 Direct (CIFS/RDMA) may drive the shift from block to file, which NFS/RDMA did not
• The industry should make a choice between RoCE, iWARP, and InfiniBand for RDMA
Shared block storage means more choices
• Fibre Channel will continue to be used for at least a decade by mainstream customers, including a Fibre Channel backbone with FCoE edge connections
• We don't yet understand how to network solid state disks at their inherent latency
• Competition is emerging between closed end-to-end data center networks; FCoE FC-BB-6 is closed
Local storage
• Traditional single-server RAID with internal disks
• PCI Express solid state disk drives will continue to go mainstream
• Leading edge: distribute the storage among the servers running the application, as Hadoop does
Closing Thoughts
Stay focused on needs and goals
• If your goal is reliable service at a reasonable cost, don't deploy an early, expensive converged network
• Do not deploy FCoE end to end if you want or need to keep your Ethernet open and multi-vendor; FCoE at the server edge is fine
Consider all storage options
• FCoE is only one choice; Fibre Channel will be around a long time
• File storage over Ethernet and iSCSI are also mainstream choices
Storage does demand a lot of a network
• Coexistence of LAN traffic with sporadic storage traffic requires strong end-to-end congestion management, or a very restricted topology like Fibre Channel's
• Very different from, and in many ways more difficult than, the demands of VoIP
Tools to Help Our Clients • Read about the FlexNetwork Architecture
• Learn about Virtual Application Networks
• Discover Intelligent Management Center
• Read more on FlexFabric
• See more about FlexCampus BYOD for education and healthcare
• Learn how to simplify communication with FlexBranch
• View the HPN Portfolio Matrix Guide
• Learn about networking services from HP Technical Services
• Learn about networking career certifications from HP ExpertONE
Thank you