OpenPOWER, a catalyst for Open Innovation

OpenPOWER Strategy
• Moore’s law no longer satisfies performance gain
• Numerous IT consumption models
• Growing workload demands
• Mature …
Cognitive development through alliances with universities and institutions
Objectives of Collaboration
Aims to support scientists and engineers in targeting the grand challenges facing society in the fields of energy and environment, health care, and Big Data processing in High Performance Computing using RISC-based architectures.
• Scale applications in science and engineering towards the petaflops range
• Evaluate programming models
• Provide input to the design of future OpenPOWER technologies in the range of petaflop to exaflop systems
• Create competence and knowledge for HPC application developers and technology developers
C-DAC’s approach is to explore the best possible performance on OpenPOWER systems and to use different performance models to investigate scalability when running across many nodes of a cluster.
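As one example of the kind of performance model meant here, the sketch below applies Amdahl’s law to estimate strong-scaling speedup; the 5% serial fraction and the node counts are illustrative assumptions, not C-DAC measurements.

#include <cstdio>

// Amdahl's law: with serial fraction s, N nodes give speedup 1 / (s + (1-s)/N).
double amdahl_speedup(double s, int nodes) {
    return 1.0 / (s + (1.0 - s) / nodes);
}

int main() {
    const double s = 0.05;  // assumed 5% serial fraction (illustrative)
    for (int n : {1, 2, 4, 8, 16, 32, 64}) {
        double sp = amdahl_speedup(s, n);
        // Parallel efficiency = speedup / nodes; it decays as nodes grow.
        printf("nodes=%2d  speedup=%5.2f  efficiency=%5.1f%%\n",
               n, sp, 100.0 * sp / n);
    }
    return 0;
}

Comparing such model curves against measured runtimes on an OpenPOWER cluster shows how far an application is from its scaling limit.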
C-DAC’s Collaboration with OpenPOWER Consortium members
Conclusions
• The OpenPOWER ecosystem provides new opportunities for co-design and research on HPC architectures and technologies.
• The OpenPOWER ecosystem is important for enabling the heterogeneous architectures required to meet the power challenges of applications.
• Results from benchmarking and testing RISC-based OpenPOWER systems (an IBM POWER8 system with NVIDIA Tesla K40 GPUs running the latest CentOS Linux release) are positive, interesting, and insightful. The benchmarking exercise covered both High Performance Computing and the Big Data processing framework.
• OpenPOWER meets a number of the challenges of future peta-scale systems.
IBM and NVIDIA Collaboration: Minsky (S822LC) for High Performance Computing
Unprecedented performance and application gains with POWER8 NVLink, delivering >2.5X the CPU-GPU bandwidth compared to x86-based systems
• 2 POWER8 CPUs and up to 4 Tesla P100 “Pascal” NVLink GPUs in a versatile 2U Linux server
• CPU-GPU NVLink: not available on x86
• Simpler programming: access system memory with page migration under GPU control; users can utilize large (even 1TB+!) data sets without manual data management (see the sketch after this list)
• Water-cooled option: improves data center efficiency and enables CPU (Turbo) / GPU (Boost) performance to be maintained at high levels for extended periods of time
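The following is a minimal CUDA sketch of that page-migration model, with an illustrative kernel and array size of our own choosing: cudaMallocManaged gives one pointer valid on both CPU and GPU, and pages migrate on demand, so the program contains no explicit cudaMemcpy.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: scale a vector in place on the GPU.
__global__ void scale(float *x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;                        // 1M floats (illustrative size)
    float *x = nullptr;
    cudaMallocManaged((void **)&x, n * sizeof(float)); // single managed allocation
    for (int i = 0; i < n; ++i) x[i] = 1.0f;      // pages touched on the CPU
    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);  // pages migrate to the GPU
    cudaDeviceSynchronize();
    printf("x[0] = %f\n", x[0]);                  // pages migrate back on access
    cudaFree(x);
    return 0;
}

On POWER8 with NVLink, this migration path is what lets kernels work on data sets larger than GPU memory without hand-written staging code.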
Introducing NVLink for bandwidth differentiation
[Figure: The NVLink Difference. Left, “Current CPU to GPU PCIe Attachment”: the CPU (DDR4, <77 GB/s memory bandwidth) feeds graphics memory over a PCIe 3.0 x16 data pipe at 32 GB/s, the system bottleneck. Right, “POWER8 with NVLink Technology”: 115 GB/s of CPU DDR4 bandwidth, with 80 GB/s NVLink connections CPU-to-GPU and GPU-to-GPU. POWER8 with NVLink delivers >2.5X the bandwidth of the PCIe data pipe: THE platform for applications utilizing CPU-GPU bandwidth.]
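To check which side of this comparison a given machine falls on, one can time a large pinned-memory transfer; the sketch below is a rough probe with an assumed transfer size (NVIDIA’s bundled bandwidthTest sample does the same measurement more carefully).

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;      // 256 MiB transfer (illustrative)
    float *h = nullptr, *d = nullptr;
    cudaMallocHost((void **)&h, bytes);     // pinned host memory for peak rates
    cudaMalloc((void **)&d, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // PCIe 3.0 x16 tops out near 16 GB/s per direction; an NVLink-attached
    // GPU should report substantially higher host-to-device bandwidth.
    printf("Host-to-device: %.1f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}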
POWER Acceleration and Design Centre (PADC) - DACH
• Collaboration between
– Research Center Jülich
– NVIDIA Europe
– IBM R&D Labs in Böblingen and Zürich
• Mission Statement
– Support scientists & engineers to target the grand challenges facing society using OpenPOWER technologies
– Grand challenges:
• Energy & Environment, e.g. plasma physics
• Information, e.g. condensed matter physics
• Healthcare, e.g. genomics, brain research
– Create competence and knowledge for application developers and technology developers
• Status
– Announced Nov 2014. Technology & application workshop October 2015.
– 4x 824L w/ GPU cluster installed at FZ Juelich, 5x at IBM Böblingen
• Both systems w/ remote access, used by European application groups
POWER Acceleration & Design Center (PADC) - France
• Collaboration among
– IBM Client Center Montpellier and Zürich Research Lab
– NVIDIA
– Mellanox
• Mission Statement
– Expand the software ecosystem around OpenPOWER
– Increase computational performance and energy efficiency
– Advance the development of data-intensive research, industrial, and commercial applications
– Focus on direct customer porting and tuning
• Status
– PADC announced July 2, 2015 in Montpellier
– 2x 824L w/ GPU cluster w/ IB installed
– 4-node Firestone w/ GPU cluster scheduled for 4Q2015
– Remote access to large cluster in Poughkeepsie, IBM WW benchmark center
– Contact Pascal Vezolle ([email protected]) for more details
– Engage with your IBM client rep or business partner to begin