REFERENCE GUIDE
Mellanox 40 to 56Gb/s InfiniBand switches deliver the highest performance and density with a complete fabric management solution, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity. Scalable switch building blocks from 36 to 648 ports in a single enclosure give IT managers the flexibility to build networks of up to tens of thousands of nodes.
Mellanox’s scale-out 10 and 40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics.
Key Features - InfiniBand
– 51.8Tb/s switching capacity
– 100 to 300ns switching latency
– Hardware-based routing
– Congestion control
Key Features - 10/40GbE
– Linear scalability, non-blocking connectivity
– High-density, lower-power data center switch
– Converged Enhanced Ethernet technology
ConnectX® Adapter Cards

Why Mellanox?
Mellanox delivers the industry’s most robust end-to-end InfiniBand and Ethernet portfolios. Our mature, field-proven product offerings include solutions for I/O, switching, and advanced management software, making us the only partner you’ll need for high-performance computing and data center connectivity. Mellanox’s scale-out 10GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional 10GbE fabrics.
Why up to 56Gb/s InfiniBand?
Enables the highest performance and lowest latency
– Proven scalability for tens of thousands of nodes
– Maximum return on investment
Highest efficiency / maintains a balanced system, ensuring the highest productivity
– No artificial bottlenecks, performance match for PCIe 3.0
– Proven to fulfill multi-process networking requirements
– Guarantees no performance degradation
Performance-driven architecture
– MPI latency of 1us, 6.6GB/s with 40Gb/s InfiniBand (bi-directional)
– MPI message rate of >40 million messages/sec
Superior application performance
– From 30% to over 100% performance increase for HPC applications
– Doubles storage throughput, cutting backup time in half
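The MPI latency figure above is the kind of number produced by a point-to-point ping-pong microbenchmark: two ranks bounce a small message back and forth, and half the average round-trip time approximates the one-way latency. The following C sketch illustrates the idea; it is not the benchmark behind the numbers in this guide, and the message size and iteration count are arbitrary choices for illustration.

/*
 * Minimal MPI ping-pong latency sketch (illustrative, not the tool
 * used to obtain the figures above). Rank 0 and rank 1 exchange a
 * small message repeatedly; half the round-trip time approximates
 * one-way latency.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;        /* arbitrary iteration count */
    char buf[8] = {0};              /* small message: latency-bound */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

Run with two ranks (for example, mpirun -np 2 ./pingpong); on InfiniBand hardware of this class with a tuned MPI library, the reported one-way latency should be on the order of a microsecond.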
Why Mellanox 10/40GbE?
Mellanox’s scale-out 10/40GbE products enable users to benefit from a far more scalable, lower latency, and virtualized fabric with lower overall fabric costs and power consumption, greater efficiencies, and simplified management than traditional Ethernet fabrics. With 10 and 40 Gigabit Ethernet Converged Network Adapters, core and top-of-rack switches, and fabric optimization software, a broader array of end users with less rigid performance requirements than those addressed by InfiniBand can benefit from a more scalable, high-performance Ethernet fabric.
Why HP?
HP isn’t just a reseller of the Mellanox InfiniBand and Ethernet NICs, switches, and software products described in this reference guide. HP also tests these components in HP systems and integrates them with related HP products in end-to-end HP solutions built at our four regional integration centers. HP provides Level 1, 2, and 3 support for these products, and our worldwide HPC engineering team meets regularly with Mellanox counterparts to stay in sync with new product enhancements. This means that customers can procure and deploy HP/Mellanox systems globally and be confident of high quality, reliability, and support from a single solution provider.
Mellanox adapter cards are designed to drive the full performance of high-speed InfiniBand (up to 56Gb/s) and 10 and 40 Gigabit Ethernet fabrics. Mellanox ConnectX adapter cards deliver high bandwidth and industry-leading connectivity for performance-driven server and storage applications in enterprise data centers, Web 2.0, high-performance computing, and embedded environments. Clustered databases, web infrastructure, and high-frequency trading are just a few example applications that achieve significant throughput and latency improvements, resulting in faster access, real-time response, and a greater number of users per server.
Benefits
– Industry-leading throughput and latency performance
– Enables I/O consolidation by supporting TCP/IP, Fibre Channel over Ethernet (FCoE), and RDMA over Ethernet transport protocols on a single adapter (a device-discovery sketch follows this list)
– Improved productivity and efficiency
– Supports industry-standard SR-IOV (Single Root I/O Virtualization) technology and delivers VM protection and granular levels of I/O services to applications
– High availability and high performance for data center networking
– Software compatible with standard TCP/UDP/IP and iSCSI stacks
– High-level silicon integration and a no-external-memory design provide low power, low cost, and high reliability

Target Applications
– Web 2.0 data centers and cloud computing
– Low latency financial services
– Data center virtualization
– I/O consolidation (single unified wire for networking, storage, and clustering)
– Video streaming
– Enterprise data center applications
– Accelerating backup and restore operations
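As noted in the Benefits list, ConnectX adapters expose RDMA alongside standard TCP/IP. The following is a minimal sketch, assuming the standard Linux libibverbs API, of how an application discovers RDMA-capable devices such as a ConnectX HCA; the printed fields are illustrative.

/*
 * Minimal libibverbs sketch (illustrative, not from this guide):
 * enumerate RDMA-capable devices and print their names and node
 * types. Build on Linux with: cc rdma_list.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    printf("found %d RDMA device(s)\n", num);
    for (int i = 0; i < num; i++) {
        /* Each entry is a channel adapter the verbs stack can open. */
        printf("  %s (%s)\n",
               ibv_get_device_name(list[i]),
               ibv_node_type_str(list[i]->node_type));
    }
    ibv_free_device_list(list);
    return 0;
}

An application would then open one of these devices with ibv_open_device() before allocating queue pairs and registering memory for RDMA transfers.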
InfiniBand and 10/40 Gigabit Ethernet Switches
BladeServer Products                                                    HP Part #    Mellanox Part #
HP BLc 4X FDR IB Switch                                                 648312-B21   N/A - Custom
HP IB QDR/EN 10Gb 2P 544M Adapter                                       644160-B21   N/A - Custom
HP IB FDR/EN 10/40Gb 2P 544M Adapter                                    644161-B21   N/A - Custom
HP IB 4X DDR Switch Module for HP BladeSystem c-Class                   489183-B21   N/A - Custom
HP IB 4X QDR Switch Module for HP BladeSystem c-Class                   489184-B21   N/A - Custom
HP IB 4X QDR ConnectX-2 Dual Port Mezz HCA for HP BladeSystem c-Class   592519-B21   N/A - Custom
HP IB 4X DDR ConnectX Dual Port Mezz HCA for HP BladeSystem c-Class     448262-B21   N/A - Custom
HP NC542m 10GbE ConnectX Dual Port Flex-10 NIC for BladeSystem c-Class  539857-B21   N/A - Custom

NIC and FlexLOM Products                                                HP Part #    Mellanox Part #