Page 1: InfiniBand Growth Trends - TOP500 (July 2015)

TOP500 Supercomputers, July 2015

InfiniBand Strengthens Leadership as the Interconnect Of Choice By Providing Best Return on Investment

Page 2: InfiniBand Growth Trends - TOP500 (July 2015)


TOP500 Performance Trends

Explosive high-performance computing market growth

Clusters continue to dominate with 87% of the TOP500 list

Mellanox InfiniBand solutions provide the highest system utilization in the TOP500 for both high-performance computing and clouds

76% CAGR

Page 3: InfiniBand Growth Trends - TOP500 (July 2015)


InfiniBand is the de-facto interconnect solution for High-Performance Computing
• Positioned to continue and expand in Cloud and Web 2.0

InfiniBand connects the majority of the systems on the TOP500 list, with 257 systems
• Increasing 15.8% year-over-year, from 222 systems in June’14 to 257 in June’15
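The 15.8% year-over-year figure can be checked directly from the two system counts quoted above; a minimal C sketch, assuming nothing beyond those counts (the cagr() helper and variable names are illustrative, and the 76% CAGR on the previous slide refers to the performance-trend chart, whose underlying data points are not reproduced here):

```c
/* Minimal sketch: verifying the year-over-year growth of InfiniBand systems
 * on the TOP500 list (222 in June'14 -> 257 in June'15, per the slide). */
#include <math.h>
#include <stdio.h>

/* Compound annual growth rate of a value growing from start to end over years. */
static double cagr(double start, double end, double years)
{
    return pow(end / start, 1.0 / years) - 1.0;
}

int main(void)
{
    double jun14 = 222.0, jun15 = 257.0;            /* InfiniBand system counts     */
    double yoy   = (jun15 - jun14) / jun14 * 100.0; /* simple year-over-year growth */

    printf("Year-over-year growth: %.1f%%\n", yoy);                      /* ~15.8% */
    printf("Single-year CAGR:      %.1f%%\n", cagr(jun14, jun15, 1.0) * 100.0);
    return 0;
}
```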

FDR InfiniBand is the most used technology on the TOP500 list, connecting 156 systems

EDR InfiniBand has entered the list with 3 systems

InfiniBand enables the most efficient system on the list with 99.8% efficiency – record!

InfiniBand enables the top 17 most efficient systems on the list

InfiniBand is the most used interconnect for Petascale systems with 33 systems

TOP500 Major Highlights

Page 4: InfiniBand Growth Trends - TOP500 (July 2015)


InfiniBand is the most used interconnect technology for high-performance computing
• InfiniBand accelerates 257 systems, 51.4% of the list

FDR InfiniBand connects the fastest InfiniBand systems
• TACC (#8), NASA (#11), Government (#13, #14), Tulip Trading (#15), EDRC (#16), Eni (#17), AFRL (#19), LRZ (#20)

InfiniBand connects the most powerful clusters
• 33 of the Petascale-performance systems

InfiniBand is the most used interconnect technology in each category - TOP100, TOP200, TOP300, TOP400
• Connects 50% (50 systems) of the TOP100 while Ethernet connects no systems
• Connects 55% (110 systems) of the TOP200 while Ethernet only 10% (20 systems)
• Connects 55% (165 systems) of the TOP300 while Ethernet only 18.7% (56 systems)
• Connects 55% (220 systems) of the TOP400 while Ethernet only 23.8% (95 systems)
• Connects 51.4% (257 systems) of the TOP500, Ethernet connects 29.4% (147 systems)

InfiniBand is the interconnect of choice for accelerator-based systems
• 77% of the accelerator-based systems are connected with InfiniBand

Diverse set of applications
• High-end HPC, commercial HPC, Cloud and enterprise data center

InfiniBand in the TOP500

Page 5: InfiniBand Growth Trends - TOP500 (July 2015)


Mellanox EDR InfiniBand is the fastest interconnect solution on the TOP500
• 100Gb/s throughput, 150 million messages per second, less than 0.7usec latency
• Introduced to the list with 3 systems on the TOP500 list

Mellanox InfiniBand is the most efficient interconnect technology on the list
• Enables the highest system utilization on the TOP500 – 99.8% system efficiency
• Enables the top 17 highest utilized systems on the TOP500 list

Mellanox InfiniBand is the only Petascale-proven, standard interconnect solution
• Connects 33 out of the 66 Petaflop-performance systems on the list (50%)
• Connects 2X the number of Cray-based systems in the Top100, 5.3X in the TOP500

Mellanox’s end-to-end scalable solutions accelerate GPU-based systems
• GPUDirect RDMA technology enables faster communications and higher performance
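As an illustration of the GPUDirect RDMA point above: with a CUDA-aware MPI library (for example Open MPI built with CUDA support), a buffer allocated with cudaMalloc can be handed directly to MPI_Send/MPI_Recv, and with GPUDirect RDMA the InfiniBand adapter reads and writes GPU memory without staging the data through host buffers. The sketch below shows only the application-level pattern, under those assumptions; build options and tuning are omitted.

```c
/* Illustrative sketch only: exchanging a GPU-resident buffer with a CUDA-aware MPI.
 * With GPUDirect RDMA enabled, the HCA accesses the device buffer directly. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                          /* 1M floats (~4 MB)  */
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    if (rank == 0) {
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);   /* device pointer */
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", n);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```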

Mellanox in the TOP500

Page 6: InfiniBand Growth Trends - TOP500 (July 2015)


InfiniBand is the de-facto interconnect solution for performance demanding applications

TOP500 Interconnect Trends

Page 7: InfiniBand Growth Trends - TOP500 (July 2015)


TOP500 Petascale-Performance Systems

Mellanox InfiniBand is the interconnect of choice for Petascale computing
• Overall 33 systems, of which 26 use FDR InfiniBand

Page 8: InfiniBand Growth Trends - TOP500 (July 2015)


TOP100: Interconnect Trends

InfiniBand is the most used interconnect solution in the TOP100
The natural choice for world-leading supercomputers: performance, efficiency, scalability

Page 9: InfiniBand Growth Trends - TOP500 (July 2015)


TOP500 InfiniBand Accelerated Systems

Number of Mellanox FDR InfiniBand systems grew 23% from June’14 to June’15
EDR InfiniBand entered the list with 3 systems

Page 10: InfiniBand Growth Trends - TOP500 (July 2015)


InfiniBand is the most used interconnect of the TOP100, 200, 300, 400 supercomputers
Due to superior performance, scalability, efficiency and return-on-investment

InfiniBand versus Ethernet – TOP100, 200, 300, 400, 500

Page 11: InfiniBand Growth Trends - TOP500 (July 2015)


TOP500 Interconnect Placement

InfiniBand is the high performance interconnect of choice
• Connects the most powerful clusters, and provides the highest system utilization

InfiniBand is the best price/performance interconnect for HPC systems

Page 12: InfiniBand Growth Trends - TOP500 (July 2015)


InfiniBand’s Unsurpassed System Efficiency

TOP500 systems listed according to their efficiency

InfiniBand is the key element responsible for the highest system efficiency

Mellanox delivers efficiencies of up to 99.8% with InfiniBand

Average Efficiency

• InfiniBand: 85%

• Cray: 74%

• 10GbE: 66%

• GigE: 43%
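The efficiency figures in this deck are the standard TOP500 ratio of sustained Linpack performance (Rmax) to theoretical peak (Rpeak). A minimal C sketch of that arithmetic, using the SuperMUC numbers quoted later in the deck (2.9 sustained Petaflops at 91% efficiency) as the example; the implied Rpeak is derived here rather than taken from the list:

```c
/* Minimal sketch: TOP500 efficiency = Rmax / Rpeak. */
#include <stdio.h>

static double efficiency(double rmax, double rpeak)
{
    return rmax / rpeak;
}

int main(void)
{
    double rmax  = 2.9;         /* sustained Petaflops (SuperMUC slide)      */
    double eff   = 0.91;        /* efficiency quoted on the same slide       */
    double rpeak = rmax / eff;  /* implied theoretical peak, ~3.2 Petaflops  */

    printf("Implied Rpeak:    %.2f PF\n", rpeak);
    printf("Efficiency check: %.1f%%\n", efficiency(rmax, rpeak) * 100.0);
    return 0;
}
```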

Page 13: InfiniBand Growth Trends - TOP500 (July 2015)


InfiniBand-connected systems’ performance demonstrates the highest growth rate
InfiniBand is responsible for 2.6X the performance versus Ethernet on the TOP500 list

TOP500 Performance Trend

Page 14: InfiniBand Growth Trends - TOP500 (July 2015)


InfiniBand Performance Trends

InfiniBand-connected CPUs grew 57% from June’14 to June’15
InfiniBand-based system performance grew 64% from June’14 to June’15
Mellanox InfiniBand is the most efficient and scalable interconnect solution
Driving factors: performance, efficiency, scalability, many-many cores

Page 15: InfiniBand Growth Trends - TOP500 (July 2015)


Proud to Accelerate Future DOE Leadership Systems (“CORAL”)

“Summit” and “Sierra” systems

Paving the Road to Exascale

Page 16: InfiniBand Growth Trends - TOP500 (July 2015)


Entering the Era of 100Gb/s (InfiniBand, Ethernet)

Interconnect (cables): Copper (Passive, Active), Optical Cables (VCSEL), Silicon Photonics

InfiniBand switch: 36 EDR (100Gb/s) ports, <90ns latency, throughput of 7.2Tb/s, Adaptive Routing, InfiniBand Router

InfiniBand adapters: 100Gb/s adapter, 0.7us latency, 150 million messages per second (10 / 25 / 40 / 50 / 56 / 100Gb/s)

Ethernet switch: 32 100GbE ports, 64 25/50GbE ports, throughput of 6.4Tb/s (10 / 25 / 40 / 50 / 100GbE)
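The two aggregate throughput figures above follow from port count times line rate, counting both directions of a full-duplex port; a minimal C sketch of that arithmetic (the helper name is illustrative):

```c
/* Minimal sketch: aggregate switch throughput = ports x line rate x 2 directions. */
#include <stdio.h>

static double switch_throughput_tbps(int ports, double gbps_per_port)
{
    return ports * gbps_per_port * 2.0 / 1000.0;   /* full duplex, Gb/s -> Tb/s */
}

int main(void)
{
    printf("EDR InfiniBand switch:  %.1f Tb/s\n", switch_throughput_tbps(36, 100.0)); /* 7.2 */
    printf("100GbE Ethernet switch: %.1f Tb/s\n", switch_throughput_tbps(32, 100.0)); /* 6.4 */
    return 0;
}
```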

Page 17: InfiniBand Growth Trends - TOP500 (July 2015)


Interconnect Solutions Leadership – Software and Hardware

Comprehensive End-to-End Interconnect Software Products

Software and Services, ICs, Switches/Gateways, Adapter Cards, Cables/Modules, Metro / WAN

At the Speeds of 10, 25, 40, 50, 56 and 100 Gigabit per Second

Comprehensive End-to-End InfiniBand and Ethernet Portfolio

Page 18: InfiniBand Growth Trends - TOP500 (July 2015)


End-to-End Interconnect Solutions for All Platforms

Highest Performance and Scalability for X86, Power, GPU, ARM and FPGA-based Compute and Storage Platforms

Smart Interconnect to Unleash The Power of All Compute Architectures

x86, OpenPOWER, GPU, ARM, FPGA

Page 19: InfiniBand Growth Trends - TOP500 (July 2015)


HPC Clouds – Performance Demands Mellanox Solutions

San Diego Supercomputer Center “Comet” System (2015) to Leverage Mellanox Solutions and Technology to Build HPC Cloud

Page 20: InfiniBand Growth Trends - TOP500 (July 2015)


Technology Roadmap – One-Generation Lead over the Competition

Roadmap chart: Mellanox interconnect generations from 2000 through 2020 (20Gb/s, 40Gb/s, 56Gb/s, 100Gb/s in 2015, and 200Gb/s ahead), spanning the Terascale, Petascale and Exascale eras. Milestones called out include the Mellanox Connected “Roadrunner” system (#1) and the Virginia Tech (Apple) system (#3 on the TOP500, 2003).

Page 21: InfiniBand Growth Trends - TOP500 (July 2015)


“LENOX”, EDR InfiniBand connected system at the Lenovo HPC innovation center, Germany

EDR InfiniBand provides ~25% higher system performance versus FDR InfiniBand on Graph500
• At 128 nodes

Lenovo Innovation Center - EDR InfiniBand System (TOP500)

EDR - Mellanox Connected

Page 22: InfiniBand Growth Trends - TOP500 (July 2015)


Magic Cube II supercomputer, #237 on the TOP500 list
Sugon Cluster TC4600E system
Mellanox end-to-end EDR InfiniBand

Shanghai Supercomputer Center – EDR InfiniBand (TOP500)

EDR - Mellanox Connected

Page 23: InfiniBand Growth Trends - TOP500 (July 2015)


Mellanox Interconnect Advantages

Mellanox solutions provide proven, scalable and high-performance end-to-end connectivity

Standards-based (InfiniBand, Ethernet), supported by a large ecosystem

Flexible, supporting all compute architectures: x86, Power, ARM, GPU, FPGA, etc.

Backward and future compatible

Proven, most used solution for Petascale systems and overall TOP500

Speed-Up Your Present, Protect Your Future
Paving The Road to Exascale Computing Together

Page 24: InfiniBand Growth Trends - TOP500 (July 2015)


“Stampede” system
6,000+ nodes (Dell), 462,462 cores, Intel Phi co-processors
5.2 Petaflops
Mellanox end-to-end FDR InfiniBand

Texas Advanced Computing Center/Univ. of Texas - #8

Petaflop - Mellanox Connected

Page 25: InfiniBand Growth Trends - TOP500 (July 2015)


Pleiades system
SGI Altix ICE, 20K InfiniBand nodes
3.4 sustained Petaflop performance
Mellanox end-to-end FDR and QDR InfiniBand
Supports a variety of scientific and engineering projects

• Coupled atmosphere-ocean models• Future space vehicle design• Large-scale dark matter halos and galaxy evolution

NASA Ames Research Center - #11

Petaflop - Mellanox Connected

Page 26: InfiniBand Growth Trends - TOP500 (July 2015)


“HPC2” system
IBM iDataPlex DX360M4
NVIDIA K20x GPUs
3.2 sustained Petaflop performance
Mellanox end-to-end FDR InfiniBand

Exploration & Production ENI S.p.A. - #17

Petaflop - Mellanox Connected

Page 27: InfiniBand Growth Trends - TOP500 (July 2015)


IBM iDataPlex and Intel Sandy Bridge
147,456 cores
Mellanox end-to-end FDR InfiniBand solutions
2.9 sustained Petaflop performance
The fastest supercomputer in Europe
91% efficiency

Leibniz Rechenzentrum SuperMUC - #20

Petaflop - Mellanox Connected

Page 28: InfiniBand Growth Trends - TOP500 (July 2015)


Tokyo Institute of Technology - #22

TSUBAME 2.0, first Petaflop system in Japan
2.8 PF performance
HP ProLiant SL390s G7 1400 servers
Mellanox 40Gb/s InfiniBand

Petaflop - Mellanox Connected

Page 29: InfiniBand Growth Trends - TOP500 (July 2015)


“Cascade” system
Atipa Visione IF442 Blade Server
2.5 sustained Petaflop performance
Mellanox end-to-end FDR InfiniBand
Intel Xeon Phi 5110P accelerator

DOE/SC/Pacific Northwest National Laboratory - #25

Petaflop - Mellanox Connected

Page 30: InfiniBand Growth Trends - TOP500 (July 2015)


“Pangea” system
SGI Altix X system, 110,400 cores
Mellanox FDR InfiniBand
2 sustained Petaflop performance
91% efficiency

Total Exploration Production - #29

Petaflop - Mellanox Connected

Page 31: InfiniBand Growth Trends - TOP500 (July 2015)


Occigen system
1.6 sustained Petaflop performance
Bull bullx DLC
Mellanox end-to-end FDR InfiniBand

Grand Equipement National de Calcul Intensif - Centre Informatique National de l'Enseignement Supérieur (GENCI-CINES) - #36

Petaflop - Mellanox Connected

Page 32: InfiniBand Growth Trends - TOP500 (July 2015)


“Spirit” system
SGI Altix X system, 74,584 cores
Mellanox FDR InfiniBand
1.4 sustained Petaflop performance
92.5% efficiency

Air Force Research Laboratory - #42

Petaflop - Mellanox Connected

Page 33: InfiniBand Growth Trends - TOP500 (July 2015)


Bull Bullx B510, Intel Sandy Bridge
77,184 cores
Mellanox end-to-end FDR InfiniBand solutions
1.36 sustained Petaflop performance

CEA/TGCC-GENCI - #44

Petaflop - Mellanox Connected

Page 34: InfiniBand Growth Trends - TOP500 (July 2015)


IBM iDataPlex DX360M4
Mellanox end-to-end FDR InfiniBand solutions
1.3 sustained Petaflop performance

Max-Planck-Gesellschaft MPI/IPP - #47

Petaflop - Mellanox Connected

Page 35: InfiniBand Growth Trends - TOP500 (July 2015)


Dawning TC3600 Blade Supercomputer
5,200 nodes, 120,640 cores, NVIDIA GPUs
Mellanox end-to-end 40Gb/s InfiniBand solutions

• ConnectX-2 and IS5000 switches

1.27 sustained Petaflop performance
The first Petaflop system in China

National Supercomputing Centre in Shenzhen (NSCS) - #48

Petaflop - Mellanox Connected

Page 36: InfiniBand Growth Trends - TOP500 (July 2015)


“Yellowstone” system
1.3 sustained Petaflop performance
72,288 processor cores, 4,518 nodes (IBM)
Mellanox end-to-end FDR InfiniBand, full fat tree, single plane
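The "full fat tree, single plane" fabric noted above is the classic non-blocking Clos topology. As a rough illustration only (not Yellowstone's actual switch counts), the sketch below shows how the maximum node count of a two-level fat tree follows from the switch radix, using a 36-port switch such as the EDR switch described on page 16 as the example radix:

```c
/* Illustrative sketch: maximum node count of a non-blocking two-level fat tree
 * built from radix-k switches. Each leaf switch uses k/2 ports for nodes and
 * k/2 for uplinks, and up to k leaf switches hang off k/2 spine switches. */
#include <stdio.h>

static int fat_tree_two_level_nodes(int radix)
{
    int leaves         = radix;      /* one spine port per leaf switch    */
    int nodes_per_leaf = radix / 2;  /* half of the ports face the nodes  */
    return leaves * nodes_per_leaf;  /* radix * radix / 2                 */
}

int main(void)
{
    printf("Max nodes, two-level fat tree of 36-port switches: %d\n",
           fat_tree_two_level_nodes(36));   /* 648 */
    return 0;
}
```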

NCAR (National Center for Atmospheric Research) - #49

Petaflop - Mellanox Connected

Page 37: InfiniBand Growth Trends - TOP500 (July 2015)


Bull Bullx B510, Intel Sandy Bridge
70,560 cores
1.24 sustained Petaflop performance
Mellanox end-to-end InfiniBand solutions

International Fusion Energy Research Centre (IFERC), EU(F4E) - Japan Broader Approach collaboration - #51

Petaflop - Mellanox Connected

Page 38: InfiniBand Growth Trends - TOP500 (July 2015)


The “Cartesius” system - the Dutch supercomputer
Bull Bullx DLC B710/B720 system
Mellanox end-to-end InfiniBand solutions
1.05 sustained Petaflop performance

SURFsara - #59

Petaflop - Mellanox Connected

Page 39: InfiniBand Growth Trends - TOP500 (July 2015)


Commissariat a l'Energie Atomique (CEA) - #64

Tera 100, first Petaflop system in Europe - 1.05 PF performance
4,300 Bull S Series servers
140,000 Intel® Xeon® 7500 processing cores
300TB of central memory, 20PB of storage
Mellanox 40Gb/s InfiniBand

Petaflop - Mellanox Connected

Page 41: InfiniBand Growth Trends - TOP500 (July 2015)


“Conte” system
HP Cluster Platform SL250s Gen8
Intel Xeon E5-2670 8C 2.6GHz
Intel Xeon Phi 5110P accelerator
Total of 77,520 cores
Mellanox FDR 56Gb/s InfiniBand end-to-end
Mellanox Connect-IB InfiniBand adapters
Mellanox MetroX long-haul InfiniBand solution
980 Tflops performance

Purdue University - #73

Page 42: InfiniBand Growth Trends - TOP500 (July 2015)


“MareNostrum 3” system
1.1 Petaflops peak performance
~50K cores, 91% efficiency
Mellanox FDR InfiniBand

Barcelona Supercomputing Center - #77

Page 43: InfiniBand Growth Trends - TOP500 (July 2015)

Thank You