REFERENCE GUIDE
ConnectX FDR10 InfiniBand and 10/40GbE Adapter Cards

Why Mellanox?
Mellanox delivers the industry’s most robust end-to-end InfiniBand and Ethernet portfolios. Our mature, field-proven product offerings include solutions for I/O, switching, and advanced management software, making us the only partner you’ll need for high-performance computing and data center connectivity. Mellanox’s scale-out FDR 56Gb/s InfiniBand and 10/40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management, providing the best return-on-investment.
Why FDR 56Gb/s InfiniBand?
Enables the highest performance and lowest latency
– Proven scalability for tens-of-thousands of nodes
– Maximum return-on-investment
Highest efficiency; maintains a balanced system, ensuring highest productivity
– Provides full bandwidth for PCIe 3.0 servers
– Proven in multi-process networking requirements
– Low CPU overhead and high server utilization
Performance-driven architecture
– MPI latency 0.7us, >12GB/s with FDR 56Gb/s InfiniBand (bi-directional)
– MPI message rate of >90 Million/sec
Superior application performance
– From 30% to over 100% HPC application performance increase
– Doubles storage throughput, cutting backup time in half
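As a sanity check on the >12GB/s figure (standard link-rate arithmetic, not taken from this guide): an FDR 4X port runs four lanes at 14.0625Gb/s with 64b/66b encoding, so

\[ 4 \times 14.0625\,\text{Gb/s} \times \tfrac{64}{66} \approx 54.5\,\text{Gb/s} \approx 6.8\,\text{GB/s per direction}, \]

or roughly 13.6GB/s bidirectional before transport overhead, which is consistent with the >12GB/s MPI figure above.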
What is FDR10 InfiniBand?
FDR10 InfiniBand is a Mellanox proprietary protocol, similar in format to FDR but running at the same physical lane speed as 40Gb/s Ethernet. FDR10 supports InfiniBand at a true 40Gb/s data rate with FEC, while taking advantage of midplanes, connectors, PCB materials, and cables designed for 40Gb/s Ethernet.
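The practical difference from QDR is in the encoding (standard arithmetic, offered here for context): QDR lanes run at 10Gb/s with 8b/10b encoding, while FDR10 lanes run at 10.3125Gb/s with 64b/66b encoding, the same physical lane rate as a 40Gb/s Ethernet lane:

\[ \text{QDR: } 4 \times 10 \times \tfrac{8}{10} = 32\,\text{Gb/s data}, \qquad \text{FDR10: } 4 \times 10.3125 \times \tfrac{64}{66} = 40\,\text{Gb/s data}. \]

So FDR10 carries a true 40Gb/s of data, about 25% more effective bandwidth than QDR at the same nominal 40Gb/s link rate.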
InfiniBand Market Applications
InfiniBand is increasingly becoming the interconnect of choice not just in high-performance computing environments, but also in mainstream enterprise grids, data center virtualization solutions, storage, and embedded environments. The low latency and high performance of InfiniBand, coupled with the economic benefits of its consolidation and virtualization capabilities, provide end customers the ideal combination as they build out their applications.
Why Mellanox 10/40GbE?
Mellanox’s scale-out 10/40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics. Utilizing 10 and 40GbE NICs, core and top-of-rack switches, and fabric optimization software, a broader array of end users can benefit from a more scalable and high-performance Ethernet fabric.
Mellanox adapter cards are designed to drive the full performance of PCIe 2.0 and 3.0 I/O over high-speed 56Gb/s FDR, 40Gb/s FDR10 InfiniBand, and 10/40GbE fabrics. ConnectX InfiniBand and Ethernet adapters lead the market in performance, throughput, power efficiency, and low latency. ConnectX adapter cards provide the highest-performing and most flexible interconnect solution for data centers, high-performance computing, Web 2.0, cloud computing, financial services, and embedded environments.
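To see why PCIe 3.0 is called out (standard PCIe arithmetic, not from this guide): a PCIe 3.0 x8 slot moves

\[ 8 \text{ lanes} \times 8\,\text{GT/s} \times \tfrac{128}{130} \approx 63\,\text{Gb/s} \approx 7.9\,\text{GB/s per direction}, \]

comfortably above the roughly 6.8GB/s a saturated FDR port delivers, whereas PCIe 2.0 x8 (5GT/s, 8b/10b encoding) tops out near 4GB/s per direction, enough for QDR but not for a fully loaded FDR port.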
Key Features
– 0.7us application-to-application latency
– 40 or 56Gb/s InfiniBand ports
– 10 or 40Gb/s Ethernet ports
– PCI Express 3.0 (up to 8GT/s)
– CPU offload of transport operations
– End-to-end QoS & congestion control
– Hardware-based I/O virtualization
– TCP/UDP/IP stateless offload
Key Advantages
– World-class cluster performance
– High-performance networking and storage access
– Guaranteed bandwidth & low-latency services
– Reliable transport
– End-to-end storage integrity
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes
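The CPU-offload and transport features listed above are typically reached through the verbs API rather than the kernel socket stack. The following is a minimal, illustrative C sketch (assuming libibverbs is installed and an HCA is present; the buffer size, queue depths, and file name are arbitrary choices, and the QP state transitions a real application needs are omitted), showing the one-time setup after which the adapter moves data with little CPU involvement:

/* Illustrative sketch: open the first RDMA device and create the basic
 * verbs resources (PD, MR, CQ, QP). Build with: gcc verbs_setup.c -libverbs
 * Error handling is abbreviated; a real application would also transition
 * the QP through INIT/RTR/RTS with ibv_modify_qp and exchange QP numbers
 * with its peer before posting work requests. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "cannot open device\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);          /* protection domain */

    /* Register memory so the HCA can DMA directly to/from it; this is the
     * mechanism behind "CPU offload of transport operations". */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue and a reliable-connected queue pair. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.send_cq          = cq;
    attr.recv_cq          = cq;
    attr.qp_type          = IBV_QPT_RC;
    attr.cap.max_send_wr  = 16;
    attr.cap.max_recv_wr  = 16;
    attr.cap.max_send_sge = 1;
    attr.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);
    if (!mr || !cq || !qp) { fprintf(stderr, "resource creation failed\n"); return 1; }

    printf("opened %s, QP number 0x%x\n",
           ibv_get_device_name(devs[0]), qp->qp_num);

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

On the Ethernet side, the same adapters expose the TCP/UDP/IP stateless offloads to the ordinary socket stack, so no application changes are needed there.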
InfiniBand and Ethernet Switches
Mellanox 40 and 56Gb/s InfiniBand switches deliver the highest performance and density, with a complete fabric management solution that enables compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. Scalable switch building blocks from 36 to 648 ports in a single enclosure give IT managers the flexibility to build networks up to tens-of-thousands of nodes.
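For a sense of how those port counts compose (standard fat-tree arithmetic, offered as an illustration rather than a vendor specification): a two-tier non-blocking fat tree built from k-port switching elements supports k²/2 end ports, so 36-port elements yield

\[ \frac{36^2}{2} = 648 \ \text{ports} \]

in a single director-class enclosure; using 648-port directors as the core with 36-port edge switches split 18 ports down / 18 up reaches 648 × 18 = 11,664 nodes, i.e. tens-of-thousands of nodes within two switching tiers.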
Key Features
– 72.5Tb/s switching capacity
– 100ns to 510ns switching latency
– Hardware-based routing
– Congestion control
– Quality of Service enforcement
– Up to 6 separate subnets
– Temperature sensors and voltage monitors
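(The 72.5Tb/s figure is consistent with a fully populated 648-port director; my arithmetic, not from the guide:

\[ 648 \ \text{ports} \times 56\,\text{Gb/s} \times 2 \ \text{(full duplex)} = 72.576\,\text{Tb/s}. \ \text{)}
\]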
Key Advantages
– High-performance fabric for parallel computation or I/O convergence
– Wirespeed InfiniBand switch platform up to 56Gb/s per port
– High-bandwidth, low-latency fabric for compute-intensive applications
Mellanox’s scale-out 10 and 40 Gigabit Ethernet switches offer the industry’s highest-density Ethernet switching, with a full portfolio of top-of-rack 1U switches delivering 10 or 40Gb/s Ethernet ports to the server or to the next level of switching. These switches enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics.
Key Features
– Up to 36 ports of 40Gb/s non-blocking Ethernet switching in 1U
– Up to 64 ports of 10Gb/s non-blocking Ethernet switching in 1U
– 230ns to 250ns port-to-port switching latency
– Low power
Key Advantages
– Optimal for data center east-west traffic, computation, or I/O convergence
– Highest switching bandwidth in 1U
– Low OpEx and CapEx and highest ROI

Dell Sales Contact: [email protected]
OEM BDM: Ronnie Payne, 512-201-3030, [email protected]
Technical Sales Rep: Will Stepanov, 512-966-4993, [email protected]
Mellanox Product Details - S & P
Dell SKU OPN Component Description 3yr Silver Support

A3993556 MIS5001QC 18-port QDR Leaf Blade for MIS5X00 Chassis Switch (support included in base chassis switch above)
A3993683 MIS5600MDC Management Module for MIS5X00 Chassis Switch
InfiniBand to Ethernet Gateway Systems
A4058896 MBX5020-1SFR QDR/10GbE BridgeX IB to EN Gateway, 4 QDR ports and 12 SFP+ 1/10GbE ports, 1U A5379673
A4785818 VLT-30034 Grid Director 4036E IB to EN Gateway, 34 QDR ports with 2 1/10GbE ports, 1U A5379670
A6747366 LIC-6036-GW FDR/40GbE and/or 10GbE L2 + L3 Ethernet + Gateway software license for Mellanox 6036 Series Switch A6747363
Software
Unified Fabric Manager (UFM) Packages
A5362972 S_W-00137 UFM Standard license for 1 managed node (up to 16 cores) A5307677
A5216210 S_W-00133 UFM Advanced license for 1 managed node (up to 16 cores) A5379544
Software Host Accelerators
A4995874 SWL-00400 VMA license per server (2 CPU Sockets) A5379722
Cables
Copper Cables, Passive with QSFP Connectors
QDR/FDR10
A5264855 MC2206130-001 Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 1m A5296015
A5058556 MC2206130-002 Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 2m
A5058557 MC2206130-003 Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 3m
A5145715 MC2206128-004 Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 4m
A5319885 MC2206128-005 Mellanox copper cable, up to IB QDR/FDR10 (40Gb/s), 4X QSFP, 5m
QDR
A5715909 MC2206126-006 Mellanox copper cable, up to IB QDR (40Gb/s), 4X QSFP, 6m
A5601742 MC2206125-007 Mellanox copper cable, up to IB QDR (40Gb/s), 4X QSFP, 7m
Blade Mezzanine Adapters
FDR
Customer Kit: ConnectX-3 dual port FDR 56Gb/s Blade Mezzanine Adapter
430-4669 MCX380A-FCAA Factory Install
FDR10
430-4833 MCX380A-TCAA Customer Kit: ConnectX-3 dual port FDR10 Blade Mezzanine Adapter
430-4834 MCX380A-TCAA Factory Install
QDR
430-4672 MCX380A-QCAA Customer Kit: ConnectX-3 dual port QDR 40Gb/s Blade Mezzanine Adapter
430-4670 MCX380A-QCAA Factory Install
QDR
430-3799 MCX380A-QCAA Customer Kit: ConnectX-2 dual port QDR 40Gb/s Blade Mezzanine Adapter
430-3804 MCX380A-QCAA Factory Install
Switches
FDR
225-2438 M4001F Customer Kit: SwitchX single width FDR InfiniBand 56Gb/s Blade Switch
225-2439 M4001F Factory Install
FDR10
225-3702 M4001T Customer Kit: SwitchX single width FDR10 InfiniBand 40Gb/s Blade Switch
225-3703 M4001T Factory Install
QDR
225-2441 M4001Q Customer Kit: SwitchX single width QDR InfiniBand 40Gb/s Blade Switch
225-2442 M4001Q Factory Install
QDR
224-4640 M3601Q Customer Kit: InfiniScale IV double width QDR InfiniBand 40Gb/s Blade Switch
224-4642 M3601Q Factory Install
PTM (Pass Through Module) for Dell M1000E Blade System
10GbE
331-0439 M1601P Customer Kit: 10GbE (XAUI) 16-port pass-through module
Factory Install: configure in Dellstar
10GbE
331-2498 Customer Kit: 10GbE (KR) 16-port pass-through module
Factory Install: configure in Dellstar
Professional Services
A5456157 GPS-00010 Project-based on-site support, per person per day
A5254787 GPS-03003 3-day (1 person) SOW services for on-site network bring-up: HW and SW install and configuration, fabric health check, best practices and knowledge transfer; travel and expense included; cabling (2 people minimum).
A5254788 GPS-03005 5-day (1 person) SOW services for on-site network bring-up: HW and SW install and configuration, fabric health check, best practices and knowledge transfer; travel and expense included; cabling (2 people minimum).