NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC.

US Trends in Data Centre Design with NREL Examples of Large Energy Savings

Understanding and Minimising the Costs of Data Centre Based IT Services Conference
University of Liverpool
Otto Van Geet, PE
June 17, 2013
Cost and Infrastructure Constraints

[Chart: Total Annual Electrical Cost (Compute + Facility), in millions of dollars, plotted against P.U.E. from 1.00 to 1.93, with separate series for Facility and HPC. Assumes a ~20 MW HPC system and $1M per MW-year utility cost.]
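The chart's underlying arithmetic can be sketched directly. A minimal illustration, assuming the slide's ~20 MW IT load and ~$1M per MW-year rate (the cost function itself is my own simplification):

```python
# Rough cost model behind the chart (assumed, for illustration only):
# total power = IT power * PUE, priced at ~$1M per MW-year.
IT_LOAD_MW = 20.0          # ~20 MW HPC system (from the slide)
RATE_PER_MW_YEAR = 1.0e6   # ~$1M per MW-year utility cost (from the slide)

def annual_cost_millions(pue: float) -> float:
    """Total annual electrical cost (compute + facility), in $M."""
    return IT_LOAD_MW * pue * RATE_PER_MW_YEAR / 1e6

for pue in (1.06, 1.3, 1.6, 1.9):
    print(f"PUE {pue:.2f}: ${annual_cost_millions(pue):.1f}M per year")
```

Every 0.1 of PUE on a 20 MW system is roughly $2M per year, which is why the facility overhead dominates the conversation.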
BPG Table of Contents

• Summary
• Background
• Information Technology Systems
• Environmental Conditions
• Air Management
• Cooling Systems
• Electrical Systems
• Other Opportunities for Energy Efficient Design
• Data Center Metrics & Benchmarking
Safe Temperature Limits

CPUs ~65°C (149°F)
GPUs ~75°C (167°F)
Memory ~85°C (185°F)

CPU, GPU & memory represent ~75-90% of the heat load.
Data center equipment's environmental conditions should fall within the ranges established by ASHRAE as published in the Thermal Guidelines book.

Zone 5: Evap. Cooler + Outside Air   6,055   417   1,656    99
Zone 6: Outside Air Only               994     0   4,079     0
Zone 7: 100% Outside Air               790     0   2,509     0
Total                                8,760   538   8,760   160
Estimated % Savings                          95%           98%
Data Center Efficiency Metric

• Power Usage Effectiveness (P.U.E.) is an industry standard data center efficiency metric.
• It is the ratio of total power drawn by the data center (compute plus facility infrastructure: pumps, lights, fans, conversions, UPS, …) to the power used by compute.
• Not perfect; some folks play games with it.
• A 2011 survey estimates the industry average is 1.8.
• In a typical data center, half of the power goes to things other than compute capability.

P.U.E. = ("IT power" + "Facility power") / "IT power"
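As a quick illustration (a minimal sketch, not taken from the slides), the metric is straightforward to compute from metered power:

```python
def pue(it_power_kw: float, facility_power_kw: float) -> float:
    """Power Usage Effectiveness: total power divided by IT power (always >= 1.0)."""
    return (it_power_kw + facility_power_kw) / it_power_kw

# Example: 1,000 kW of compute with 800 kW of cooling/distribution overhead
print(pue(1000.0, 800.0))  # 1.8, roughly the 2011 industry average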
PUE – Simple and Effective

[Chart: Data Center PUE (0.75 to 1.45) plotted alongside outdoor temperature (-20 to 100 °F) over time.]
"I am re-using waste heat from my data center on another part of my site and my PUE is 0.8!"

ASHRAE & friends (DOE, EPA, TGG, 7x24, etc.) do not allow reused energy in PUE, and PUE is always >1.0. Another metric has been developed by The Green Grid: ERE – Energy Reuse Effectiveness.
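For reference, The Green Grid defines ERE so that reused energy is credited against the total; a minimal sketch of that calculation (the example numbers are illustrative, not NREL's measurements):

```python
def ere(it_kwh: float, facility_kwh: float, reused_kwh: float) -> float:
    """Energy Reuse Effectiveness: (total energy - reused energy) / IT energy.
    Equals PUE when nothing is reused, and can legitimately drop below 1.0."""
    return (it_kwh + facility_kwh - reused_kwh) / it_kwh

# Illustrative: PUE of 1.06, with roughly a third of the total energy reused for heating
print(ere(1000.0, 60.0, 360.0))  # ~0.70
```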
…condensation.
– Provide cooling with higher temperature coolant.
• Eliminate expensive & inefficient chillers.
• Save wasted fan energy and use it for computing.
• Unlock your cores and overclock to increase throughput!
Liquid Cooling – Overview

Water and other liquids (dielectrics, glycols and refrigerants) may be used for heat removal.
• Liquids typically use LESS transport energy (a 14.36 air-to-water horsepower ratio in the slide's example; a first-principles sketch follows this list).
• Liquid-to-liquid heat exchangers have closer approach temperatures than liquid-to-air (coils), yielding increased outside air hours.
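The transport-energy advantage follows from basic heat-transfer arithmetic. A minimal sketch, assuming representative fluid properties (the slide's 14.36 hp ratio comes from a specific design example and is not reproduced here):

```python
# Volumetric flow needed to carry the same heat load with a 10 C coolant temperature rise.
HEAT_KW = 100.0      # hypothetical heat load
DELTA_T = 10.0       # coolant temperature rise, C

def flow_m3_per_s(heat_kw: float, rho: float, cp_kj_per_kg_k: float, delta_t: float) -> float:
    """Required volumetric flow from Q = m_dot * cp * dT, with m_dot = rho * V_dot."""
    return heat_kw / (rho * cp_kj_per_kg_k * delta_t)

air = flow_m3_per_s(HEAT_KW, 1.2, 1.005, DELTA_T)      # ~8.3 m^3/s of air
water = flow_m3_per_s(HEAT_KW, 998.0, 4.18, DELTA_T)   # ~0.0024 m^3/s of water

print(f"Air: {air:.2f} m^3/s  vs  water: {water * 1000:.2f} L/s")
```

Moving thousands of times less volume is why pump power is so much smaller than fan power for the same heat load.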
2011 ASHRAE Liquid Cooling Guidelines

NREL ESIF HPC (HP hardware) uses a 24°C supply and 40°C return (W4/W5).
NREL HPC Data Center – Showcase Facility

• 10 MW, 929 m²
• Leverage favorable climate
• Use direct water-to-rack cooling
• DC manager responsible for ALL DC costs, including energy!
• Waste heat captured and used to heat labs & offices.
• World's most energy efficient data center, PUE 1.06!
• Lower CapEx and OpEx.

Leveraged expertise in energy efficient buildings to focus on a showcase data center. Chips-to-bricks approach.

High Performance Computing
• Operational 1/2013; petascale+ HPC capability in 8/2013.
• 20-year planning horizon, 5 to 6 HPC generations.
Critical Data Center Specs

• Warm water cooling, 24°C. Water is a much better working fluid than air; pumps trump fans. Utilize high-quality waste heat, 40°C or warmer. 90%+ of IT heat load to liquid.
• High power distribution: 480 VAC, eliminate conversions.
• Think outside the box. Don't be satisfied with an energy efficient data center nestled on campus surrounded by inefficient laboratory and office buildings. Innovate, integrate, optimize.

Dashboards report instantaneous, seasonal and cumulative PUE values.
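Instantaneous PUE moves with the weather, so a dashboard typically reports an energy-weighted (cumulative) value as well. A minimal sketch of that calculation, with made-up meter readings (this is not NREL's actual dashboard code):

```python
# Hypothetical meter readings: (IT energy kWh, facility energy kWh) per interval
intervals = [(250.0, 20.0), (240.0, 12.0), (260.0, 9.0)]

it_total = sum(it for it, _ in intervals)
fac_total = sum(fac for _, fac in intervals)

instantaneous_pue = [(it + fac) / it for it, fac in intervals]
cumulative_pue = (it_total + fac_total) / it_total  # energy-weighted over the period

print(instantaneous_pue)          # varies interval to interval (e.g. with weather)
print(round(cumulative_pue, 3))
```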
NREL ESIF Data Center Cross Section

• The data center equivalent of the "visible man":
  – Reveal not just boxes with blinky lights, but the inner workings of the building as well.
  – Tour views into the pump room and mechanical spaces.
  – Color-coded pipes, LCD monitors.
Data Center

• 2.5 MW day-one capacity (utility cost $500K/yr/MW)
• 10 MW ultimate capacity
• Petaflop
• No vapor compression for cooling
Summer Cooling Mode

PUE: typical data center = 1.5 to 2.0; NREL ESIF = 1.04, about 30% more energy efficient than your typical "green" data center.
Winter Cooling Mode

ERE – Energy Reuse Effectiveness: how efficiently are we using the waste heat to heat the rest of the building? NREL ESIF = 0.7 (we use 30% of the waste heat), with more to come from future campus loops.

[Diagram labels: Future Campus Heating Loop, High Bay Heating Loop, Office Heating Loop, Conference Heating Loop, Data Center]
Data Center – Cooling Strategy

• Water-to-rack cooling for high performance computers handles 90% of the total load.
• Air cooling for legacy equipment handles 10% of the total load.

[Diagram labels: 95°F air, 75°F air]
PUE 1.0X – Focus on the "1"

True efficiency requires 3-D optimization:
• Facility PUE: we all know how to do this!
• IT power consumption: increased work per watt; reduce or eliminate fans; component-level heat exchange; newest processors are more efficient.
• Energy re-use: direct liquid cooling; higher return water temps; holistic view of data center planning.
What's Next?

✓ Energy efficient supporting infrastructure: pumps, large pipes, high-voltage (380 to 480 V) electrical to the rack.
✓ Efficient HPC for the planned workload.
✓ Capture and re-use waste heat.

Can we manage and "optimize" workflows, with a varied job mix, within a given energy "budget"? Can we do this as part of a larger "ecosystem"?

Steve Hammond
Other Factors

DemandSMART: Comprehensive Demand Response. Balancing supply and demand on the electricity grid is difficult and expensive; end users that provide a balancing resource are compensated for the service.

[Chart: Annual Electricity Demand as a Percent of Available Capacity (25% to 100%) across Winter, Spring, Summer, and Fall.]

DC as part of the Campus Energy System:
• 4 MW solar
• Use waste heat
• Better rates, shed load
Parting Thoughts

• Energy efficient data centers: been there, done that.
  – We know how; let's just apply best practices.
  – Don't fear H2O: liquid cooling will be increasingly prevalent.
• Metrics will lead us into sustainability.
  – If you don't measure/monitor it, you can't manage it.
  – As PUE has done, ERE, Carbon Usage Effectiveness (CUE), etc. will help drive sustainability.
• Energy efficient and sustainable computing: it's all about the "1".
  – 1.0 or 0.06? Where do we focus? Compute & energy reuse.
• Holistic approaches to energy management.
  – Lots of open research questions.
  – Projects may get an energy allocation rather than a node-hour allocation.
• When considering liquid cooled systems, insist that providers adhere to the latest ASHRAE water quality spec, or it could be costly.
2011 ASHRAE Liquid Cooling Guidelines

2011 ASHRAE Thermal Guidelines

2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance. White paper prepared by ASHRAE Technical Committee TC 9.9.
Energy Savings Potential: Economizer Cooling

Energy savings potential for the recommended envelope, Stage 1: Economizer Cooling. (Source: Billy Roberts, NREL)
Data Center Energy

• Data centers are energy intensive facilities.
  – 10-100x more energy intensive than an office.
  – Server racks well in excess of 30 kW.
  – Power and cooling constraints in existing facilities.
• Data center inefficiency steals power that would otherwise support compute capability.
• Important to have a DC manager responsible for ALL DC costs, including energy!
Energy Savings Potential: Economizer + Direct Evaporative Cooling

Energy savings potential for the recommended envelope, Stage 2: Economizer + Direct Evaporative Cooling. (Source: Billy Roberts, NREL)
Energy Savings Potential: Economizer + Direct Evap. + Multistage Indirect Evap. Cooling

Energy savings potential for the recommended envelope, Stage 3: Economizer + Direct Evap. + Multistage Indirect Evap. Cooling. (Source: Billy Roberts, NREL)
Data Center Energy Efficiency

• ASHRAE 90.1-2011 requires an economizer in most data centers.
• ASHRAE Standard 90.4P, Energy Standard for Data Centers and Telecommunications Buildings
  – PURPOSE: To establish the minimum energy efficiency requirements of data centers and telecommunications buildings for design, construction, and a plan for operation and maintenance.
  – SCOPE: This standard applies to new data centers and telecommunications buildings, new additions, and modifications to such buildings or portions thereof and their systems.
  – Will set a minimum PUE based on climate.
  – More detail at: https://www.ashrae.org/news/2013/ashrae-seeks-
Energy Conservation Measures

3. Install economizer (air or water) and evaporative cooling (direct or indirect).
4. Raise discharge air temperature. Install VFDs on all computer room air conditioning (CRAC) fans (if used) and network the controls.
5. Reuse data center waste heat if possible.
6. Raise the chilled water (if used) set-point. Increasing chilled water temperature by 1°C reduces chiller energy use by about 3%.
7. Install high efficiency equipment, including UPS, power supplies, etc.
8. Move chilled water as close to the server as possible (direct liquid cooling).
9. Consider a centralized high efficiency water-cooled chiller plant (air-cooled = 2.9 COP, water-cooled = 7.8 COP); a rough comparison is sketched below.
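The arithmetic behind items 6 and 9 is easy to sanity-check. A minimal sketch, using the ~3% per °C figure and the COPs from the list above; the load and setpoint change are made-up illustrative numbers:

```python
def chiller_power_kw(heat_load_kw: float, cop: float) -> float:
    """Electrical power needed to reject a given heat load at a given COP."""
    return heat_load_kw / cop

LOAD_KW = 1000.0  # illustrative cooling load

# Item 9: air-cooled vs water-cooled chiller plant (COPs from the slide)
air_cooled = chiller_power_kw(LOAD_KW, 2.9)    # ~345 kW
water_cooled = chiller_power_kw(LOAD_KW, 7.8)  # ~128 kW

# Item 6: raising the chilled-water setpoint saves ~3% of chiller energy per degree C
def reset_savings_kw(base_kw: float, delta_c: float, pct_per_degree: float = 0.03) -> float:
    return base_kw * pct_per_degree * delta_c

print(f"Air-cooled: {air_cooled:.0f} kW, water-cooled: {water_cooled:.0f} kW")
print(f"Raising the setpoint 3 C on the water-cooled plant saves ~{reset_savings_kw(water_cooled, 3):.0f} kW")
```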
Equipment Environmental Specification

• Air inlet temperature to the IT equipment is the important specification to meet.
• Outlet temperature is not important to the IT equipment.
Key Nomenclature

• Recommended Range (Statement of Reliability): preferred facility operation; most values should be within this range.
• Allowable Range (Statement of Functionality): robustness of equipment; no values should be outside this range.

[Diagram of rack intake temperature: Min Allowable, Min Recommended (Under-Temp below), Recommended Range, Max Recommended (Over-Temp above), Max Allowable; the Allowable Range spans Min to Max Allowable.]
Improve Air Management

• Typically, more air is circulated than required.
• Air mixing and short circuiting lead to:
  – Low supply temperature
  – Low Delta T
• Use hot and cold aisles.
• Improve isolation of hot and cold aisles to:
  – Reduce fan energy
  – Improve air-conditioning efficiency
  – Increase cooling capacity

Hot aisle/cold aisle configuration decreases mixing of intake & exhaust air.

Courtesy of Henry Coles, Lawrence Berkeley National Laboratory
"Chill-off 2" Evaluation of Close-coupled Cooling Solutions

[Chart comparing close-coupled cooling solutions by relative energy use.]

Courtesy of Geoffrey Bell and Henry Coles, Lawrence Berkeley National Laboratory
Cooling Takeaways…
• Use a central plant (e.g. chiller/CRAHs) vs. CRAC units • Use centralized controls on CRAC/CRAH units to prevent
simultaneous humidifying and dehumidifying. • Move to liquid cooling (room, row, rack, chip) • Consider VSDs on fans, pumps, chillers, and towers • Use air-‐ or water-‐side free cooling. • Expand humidity range and improve humidity control (or