EUROTECH HPC AURORA DEPARTMENTAL HPC SYSTEMS
Nov 01, 2014
Eurotech departmental HPC systems
The Aurora G-Station and Cube
The Aurora G-Station and Aurora Cube are supercomputers in a box.
Equipped with the latest Intel and Nvidia processors, they offer extraordinary power to process the heaviest computational loads.
Industrial applications (EDA, CAE, CFD, signal processing, ...)
Digital media (rendering)
Scientific computing
Computational finance, cyber security, forensics, ...
Multicore software development
Eurotech departmental HPC systems
Bridge the HPC skills gap
The Aurora G-Station and Aurora Cube are appliance ready.
Eurotech can configure their departmental systems with all of the software (operating system and middleware) needed to make them “application” ready
Build appliances that fully leverage all Aurora benefits
Let users concentrate on their domain and core business
Provide HPC ready to go solutions that bridge the HPC skills shortage
Eurotech departmental HPC systems
Compact and powerful
Features and value
Accelerate applications: full supercomputing functionality in a box, high speed interconnects
No fans, no noise: water cooling allows silent operation
Easy to deploy: no need for a controlled environment; same installation complexity as a split air conditioner
Save space: fits under a desk, up to 384 CPU cores
Save energy: based on the most efficient architecture in the world (over 3.4 GFlop/s per Watt)
Reliable: leverages Eurotech's competence in making bomb-proof devices
Effective workstation replacement: support for remote visualization
Eurotech departmental HPC systems
Energy efficiency
Save more than 50% of energy
Limit air conditioning
Limit fans
Best energy efficiency
Water cooling
Example: an 8 kW installation
Compared to a standard air-cooled solution based on clusters or workstations, an installation of 8 kW of IT equipment with Eurotech cooling can save:
Between 1.6 kW and 2.4 kW through higher energy efficiency (no fans, liquid cooling, energy-efficient design, ...)
Between 5 and 7 kW of air conditioning
This amounts to more than 30,000 € over a 4-year hardware lifespan.
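The savings figures above can be sanity-checked with some rough arithmetic. This is a sketch, not a figure from the datasheet: the electricity price of 0.15 €/kWh and 24/7 operation are assumptions.

```python
# Rough check of the energy-savings example above.
# Assumed (not from the datasheet): 0.15 EUR/kWh, system running 24/7.
HOURS_PER_YEAR = 24 * 365        # 8760 h
PRICE_EUR_PER_KWH = 0.15         # assumption
YEARS = 4                        # stated hardware lifespan

# Stated savings vs. a standard air-cooled solution (midpoints of the ranges):
efficiency_saving_kw = (1.6 + 2.4) / 2   # no fans, liquid cooling, ...
aircon_saving_kw = (5 + 7) / 2           # avoided air conditioning
total_saving_kw = efficiency_saving_kw + aircon_saving_kw  # ~8 kW

energy_kwh = total_saving_kw * HOURS_PER_YEAR * YEARS
cost_eur = energy_kwh * PRICE_EUR_PER_KWH
print(f"~{total_saving_kw:.1f} kW saved -> {energy_kwh:,.0f} kWh "
      f"-> ~{cost_eur:,.0f} EUR over {YEARS} years")
```

Even the lower bound of the stated ranges (1.6 + 5 = 6.6 kW) works out to roughly 35,000 € at the assumed price, consistent with the "more than 30,000 €" claim.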
Departmental HPC systems - Architecture
Full-fledged HPC systems in a box
Up to 26 TFlop/s
8 or 16 water-cooled double-socket blades
Infiniband interconnects
HPC system: networked G-Stations or Cubes (Infiniband or Ethernet)
Management node
Power supply
Switches
Monitoring system
Storage
Aurora HPC-20-50: 8- or 16-slot chassis with backplane
Aurora Direct Hot Liquid Cooling
Aurora BOARD node card
2x Intel Xeon E5 v2
2x Nvidia Tesla GPU or 2x Intel Xeon Phi
The Aurora G-Station
Leverage acceleration
Performance: up to 26 TFlop/s per rack (peak)
Processor: 2x Intel Xeon E5 v2 series per node
Accelerators: 2x Nvidia Tesla K20 / K20X / K40 per node, or 2x Intel Xeon Phi 5120D per node
Architecture: standard configuration:
1 Aurora HPC-20-50 chassis
Up to 8 Aurora BOARD 20-30 blades
1 Aurora Root Card (switching and control unit)
1 front node: Eurotech Antares ICE server (Intel Core i7, 8 GB memory, 500 GB HD)
1 Cisco Layer-3 switch
Options:
Optional Nvidia K1/K2 cards
Storage (Infiniband or Ethernet)
Memory: up to 64 GB (128 GB) soldered RAM per node, ECC DDR3 SDRAM 1866 MT/s
Interconnects: 40 Gbps QDR Infiniband
Cooling: Aurora Direct Hot Liquid Cooling (external or embedded heat exchanger configurations)
Power: 8 kW per rack (peak, fully loaded)
Power plug: single-phase 16/32 A or three-phase 16/32 A
Weight: integrated 120 kg (265 pounds, fully loaded); split 70 kg (154 pounds)
Local storage: up to 2 TB 2.5" SATA disk per node; up to 512 GB 1.8" micro SATA SSD per node
Fast storage: optional Infiniband storage
I/O ports: 16x 40 Gbps QDR Infiniband, 3x 1 Gbps Ethernet, 4x USB, 1x standard VGA
The Aurora Cube
Power and versatility
Performance: up to 8 TFlop/s per rack (peak)
Processor: 2x Intel Xeon E5 v2 series per node
Architecture: standard configuration:
1 Aurora HPC-20-50 chassis
Up to 8 Aurora BOARD 20-30 blades
1 Aurora Root Card (switching and control unit)
1 front node: Eurotech Antares ICE server (Intel Core i7, 8 GB memory, 500 GB HD)
1 Cisco Layer-3 switch
Options:
Optional Nvidia K1/K2 cards
Storage (Infiniband or Ethernet)
Memory: up to 64 GB (128 GB) soldered RAM per node, ECC DDR3 SDRAM 1866 MT/s
Interconnects: 40 Gbps QDR Infiniband
Local storage: up to 2 TB 2.5" SATA disk per node; up to 512 GB 1.8" micro SATA SSD per node
Fast storage: optional Infiniband storage
I/O ports: 16x 40 Gbps QDR Infiniband, 3x 1 Gbps Ethernet, 4x USB, 1x standard VGA
The Aurora G-Station and Cube
External water cooling
DIMENSIONS
Computational unit: H 65 cm, W 60 cm, D 75 cm (H 26", W 25", D 30")
Cooling unit: H 80 cm, W 100 cm, D 25 cm (H 31.5", W 39.4", D 9.8")
EXTERNAL COOLING OPTION
Captures all Aurora advantages of silence, no fans, energy efficiency and compactness
The Aurora hot direct water cooling
From heat exchanger to distribution bars
From distribution bars to backplane
From backplane to cold plates
Cold plates directly cool components
Installation
External water cooling: same installation as an air conditioner
The Aurora HPC workgroup solutions
Software stack
THANK YOU
Aurora supercomputers