Dec 05, 2014
Nov 18th 2013
Readers’ Choice: GENCI CURIE for DEUS (Dark Energy Universe Simulation) project
Nov 18th 2013
Readers’ Choice best server product or technology: Intel Xeon Processor
Editors’ Choice best server product or technology: Intel Xeon Phi Coprocessor
Readers’ Choice top product or technology to watch: Intel Xeon Phi Coprocessor
Nov 18th 2013
Bull Extreme Factory Remote Visualizer
3D Streaming Technology
Bull: from Supercomputers to Cloud Computing
Servers
• Full range development from ASICs to boards, blades, racks
• Support for accelerators
Infrastructure
• Data Center design
• Mobile Data Center
• Water-Cooling
Expertise & services
• HPC Systems Architecture
• Applications & Performance
• Energy Efficiency
• Data Management
• HPC Cloud
Software
• Open, scalable, reliable SW
• Development Environment
• Linux, OpenMPI, Lustre, Slurm
• Administration & monitoring
bullx supercomputer suite
Energy Optimization Paths towards Exascale Computing
Supercomputer power efficiency:
• Microelectronics optimization: picowatts/operation
• Environment: PUE
• Middleware: resource utilization
• Applications: new paradigms
DLC blade system
Brings (warm) water to the heart of the computing power
• All the benefits of bullx blades
• PUE close to 1
• Standard servicing
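PUE (Power Usage Effectiveness) is the ratio of total facility power to the power actually delivered to IT equipment, so a value close to 1 means almost no cooling or distribution overhead. A minimal sketch of the metric, using illustrative power figures that are assumptions and not Bull measurements:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    An ideal data center (all power reaching IT gear) has PUE = 1.0."""
    return total_facility_kw / it_equipment_kw

# Illustrative figures (assumptions, not measured values):
# a 100 kW IT load with varying cooling/distribution overhead.
print(pue(110.0, 100.0))  # warm-water direct liquid cooling, low overhead
print(pue(180.0, 100.0))  # air-cooled room with chillers, high overhead
```

The comparison shows why direct liquid cooling matters: removing chillers and most fans shrinks the numerator while the IT load stays the same.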
DLC B710 blade
• Double blade hosting 2 nodes
• 2 x 2 Intel® Xeon® processors
E5 Family
• InfiniBand FDR – ready for EDR
• Standard CPUs, memory, disks
• As easy to maintain as an air-cooled blade
DLC B715 blade
• 2x Intel Xeon CPUs
• 2x accelerators (Intel® Xeon Phi™ or NVIDIA® Tesla™)
• InfiniBand FDR
• Liquid cooling via cold plate
DLC blade system
Cooperative Power Chassis
• Standard connection at the top of the rack: physical separation of water and high voltage
Hydraulic chassis
• Temperature of water in customer loop:
  - Rack inlet temperature: 35°C max
  - Rack outlet temperature depends on rack consumption; typically for 5 chassis: 42°C
• Water flow needed per rack:
  - Controlled according to rack consumption
  - Maximum: 4.5 m³/h (75 l/min)
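The flow and temperature figures above determine how much heat one rack can reject into the warm-water loop, via Q = ṁ·c·ΔT. A quick sketch, where the water properties are textbook values and the 7 K rise comes from the 35°C inlet / 42°C typical outlet figures:

```python
def rack_heat_kw(flow_l_per_min: float, delta_t_k: float) -> float:
    """Heat carried away by the water loop: Q = m_dot * c_p * dT."""
    rho = 1.0        # kg per litre (water, approximately)
    c_p = 4186.0     # J/(kg*K), specific heat of water
    m_dot = flow_l_per_min * rho / 60.0  # mass flow in kg/s
    return m_dot * c_p * delta_t_k / 1000.0  # convert W -> kW

# Max flow 75 l/min, 35 C in -> 42 C out (7 K rise):
print(round(rack_heat_kw(75.0, 7.0), 1))  # ~36.6 kW per rack
```

So at maximum flow the loop can absorb on the order of 36–37 kW of rack heat at the quoted temperatures.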
blade chassis (B500 series)
• Ethernet Switch Board
• Chassis Management Board
• 4x hot-swappable PSUs
B510 compute blade
• 2 compute nodes
• 2x Intel Xeon E5-2600 processors (v2-ready)
• DDR3 memory (x8)
• HDD/SSD 2.5"
• ConnectX-3 FDR
• 2x fans
B515 accelerator blade
• Double-width blade
• 2 x NVIDIA K20/K20X GPUs (Kepler)
OR
• 2 x Intel Xeon Phi coprocessors
• 2 x Intel Xeon E5-24xx (ready for future generation)
• Dedicated PCIe 3.0 x16 connection for each accelerator
• 2 InfiniBand FDR ports connected to 1st-level switch, so that each accelerator has access to full FDR bandwidth
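For context, a 4x FDR InfiniBand port signals at 14.0625 Gbit/s per lane with 64b/66b line encoding, so "full FDR bandwidth" per accelerator is roughly 54.5 Gbit/s of usable data rate. A minimal sketch; the lane count and encoding are standard InfiniBand FDR parameters, not figures from this slide:

```python
def fdr_data_rate_gbps(lanes: int = 4) -> float:
    """Usable InfiniBand FDR bandwidth: 14.0625 Gbit/s signaling per lane,
    64b/66b line encoding (64 payload bits per 66 transmitted bits)."""
    signal_rate = 14.0625          # Gbit/s per lane
    encoding_efficiency = 64 / 66  # 64b/66b
    return lanes * signal_rate * encoding_efficiency

print(round(fdr_data_rate_gbps(), 1))  # ~54.5 Gbit/s per 4x port
```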
bullx supercomputer suite
Operating System: bullx Linux - Red Hat Enterprise Linux (RHEL) - SUSE Linux Enterprise Server (SLES)
Application Management
• Development Environment: bullx DE
• Execution Environment: bullx BM (Batch Management), bullx MPI (Message Passing Interface)
• Extended offer: LSF, PBS Pro, DDT, TotalView...
Supercomputer Management
• Management Center: bullx MC
• Software Manager
• Monitoring & Control
• Infrastructure Manager
• SIR (Supercomputer Information Repository)
Data Management
• Parallel File System: bullx PFS (Lustre)
• Network File System: NFS
• Local File System
• Extended offer: GPFS
mobull, the plug and boot data center
• Up to 2 Petaflops per container
• Fast field deployment
• All-weather operation: -30° to +50°
• Secure
• Cost effective
• Hardware agnostic
• High density - innovative cooling system
• Pay As You Grow - Opex financed
The container solution from Bull
computer simulation in the age of cloud computing
Perform pay-per-use HPC on bullx solutions
No requirement for heavy investment
Set up and operated by Bull HPC experts
High level of service with total security
Web portal access for full HPC workflow
Data management
Job management
Licensing management
Remote 3D visualization
Accounting
HPC & Viz Portal "Appliance"
• Front-end for HPC private cloud
• Sold as a product
• Integrated solution (hardware +
software)
• Customers in production in many
countries
• Graphics customization framework
• Modular (HPC, VIZ, HPC+VIZ)
• Supports various job schedulers
• Supports various remote streamers
XRV is Bull’s client-server 3D
streaming technology
• Sold as a product
• Standalone or integrated with XCS
• Supports Windows and Linux clients
• Several sessions mapped on a single GPU
• Groundbreaking performance: video compression based on advanced algorithms, requiring little bandwidth (3 Mbit/s for comfortable work at 1280x1024)
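To put the 3 Mbit/s figure in perspective: an uncompressed 1280x1024 stream at 24-bit color and 30 fps (the frame rate is an assumption; the slide does not state one) would need nearly 1 Gbit/s, so the streamer is achieving a compression ratio on the order of 300:1:

```python
def raw_stream_mbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Bandwidth of an uncompressed video stream, in Mbit/s."""
    return width * height * bits_per_pixel * fps / 1e6

raw = raw_stream_mbps(1280, 1024, 24, 30)  # ~943.7 Mbit/s uncompressed
print(round(raw / 3.0))                    # ~315:1 ratio vs the 3 Mbit/s stream
```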
The objective of the Mont-Blanc project:
to design a new type of computer
architecture capable of setting future global
HPC standards that will deliver Exascale
performance while using 30 times less
energy.
This project is coordinated by the Barcelona
Supercomputing Center (BSC) and is partly funded
by the European Commission.
European Exascale computing
approach, based on embedded
power-efficient technology
www.montblanc-project.eu
European Technology Platform
for High Performance Computing
An industry-led forum founded by stakeholders of HPC technology supply and research
Horizon 2020
ERA
Strategic Research Agenda
European HPC Ecosystem Vitality
Calls For Proposals
HPC Usage Expansion
Extreme Scale Requirements
New HPC Deployments
HPC Stack Elements
• HPC System Architecture
• System Software and Management
• Programming environment, including support for extreme parallelism
• HPC usage models, including: big data, HPC in clouds
• Usability
• Affordability (cost, energy)
• HPC services, including: ISV support, end-user support
• SME focus
• Education & training
• Improve system and environment characteristics, including: energy efficiency, system resiliency, balanced compute subsystem, I/O and storage performance
http://www.etp4hpc.eu