To 30kW and Beyond: High-Density Infrastructure Strategies that Improve Efficiencies and Cut Costs
Mar 14, 2016
Emerson Network Power: The global leader in enabling Business-Critical Continuity
Emerson Technologies
Uninterruptible power, power distribution, surge protection, transfer switching, DC power, precision cooling, high density cooling, racks, rack monitoring, sensors and controls, KVM, real-time monitoring, and data center software.
Emerson Network Power – an organization with established customers
Presentation topics
• Emerson Network Power overview
• "High Density Equals Lower Cost: High Density Design Strategies for Improving Efficiency and Performance," Steve Madara, Vice President and General Manager, Liebert North America Precision Cooling, Emerson Network Power
• "Sandia National Laboratories' Energy Efficient Red Sky Design," David Martinez, Facilities Coordinator, Computing Infrastructure and Support Operations, Sandia National Laboratories
• Question and answer session
High Density Equals Lower Cost: High Density Design Strategies for Improving Efficiency and Performance
Steve Madara, Vice President and General Manager, Liebert North America Precision Cooling, Emerson Network Power
Agenda
• Industry trends and challenges
• Three strategies for enhancing cooling efficiency
• High density equals lower cost
Industry Trends and Challenges
Trends and challenges
• Server design: higher ΔT across the server, raising leaving air temperatures
[Diagram: traditional cooling unit]
Trends and challenges
• Regulatory
  – ASHRAE 90.1 Standard, 2010 revisions
  – Refrigerant legislation
• Discussion around lower dew point limits
• Raising cold aisle temperatures
  – Intended to improve cooling capacity and efficiency
  – Monitor the impact on server fans
• No cooling
Spring 2010 Data Center Users' Group survey
[Chart: average kW per rack, now vs. in two years. Average today: ~8 kW; expected in two years: 10-12 kW]
[Chart: maximum kW per rack, now vs. in two years. Average today: >12 kW; expected in two years: 14-16 kW]
Density bands surveyed: 2 kW or less, >2-4 kW, >4-8 kW, >8-12 kW, >12-16 kW, >16-20 kW, >20-24 kW, greater than 24 kW, and unsure.
Spring 2010 Data Center Users' Group survey: Top data center issues
• Experienced hot spots: 40%
• Run out of power: 26%
• Experienced an outage: 23%
• Experienced a "water event": 23%
• N/A (have not had any issues): 23%
• Run out of cooling: 18%
• Run out of floor space: 16%
• Excessive energy costs: 13%
• Other: 2%
Spring 2010 Data Center Users' Group survey: Implemented or considered technologies
[Chart: for each technology, the percentage of respondents who have already implemented it, plan to implement it, are still considering it, considered it but decided against it, will not consider it, or are unsure. Technologies surveyed: solar array, DC power, fluid economizer on chiller plant, fluid economizer using dry coolers, air economizer, cold aisle containment, containerized/modular data center, wireless monitoring, and rack-based cooling]
Impact of design trends on the cooling system
• Legacy data center designs with poor airflow management resulted in significant bypass air
  – A typical CRAC unit is designed for a ΔT of around 21°F at 72°F return air
  – Low-ΔT servers plus significant bypass air lowered the actual CRAC ΔT, and thus its efficiency
[Diagram: a 4 kW server with an 18°F rise (60°F in, 78°F out) draws 70% of the CRAC airflow; the remaining 30% bypasses at 60°F, so the CRAC sees a ~72°F return against a 58°F supply. Design ΔT: 21°F; operating ΔT: 16°F]
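The return-air temperature in the diagram above is just an airflow-weighted mix of server exhaust and bypass air. A minimal sketch of that arithmetic in Python, with illustrative variable names and the slide's figures:

```python
# Mixed return air: an airflow-weighted average of server exhaust and
# bypass air. Figures from the slide; names are illustrative only.

def crac_return_temp(tile_supply_f, server_exit_f, server_cfm_frac):
    """Temperature of the air returning to the CRAC, in degrees F."""
    bypass_frac = 1.0 - server_cfm_frac
    return server_cfm_frac * server_exit_f + bypass_frac * tile_supply_f

# 60 F at the tiles, 78 F server exhaust, 70% of CRAC airflow through servers
print(f"Return air: {crac_return_temp(60.0, 78.0, 0.70):.1f} F")  # ~72.6 F

# With the return pulled down toward 72 F, the CRAC runs well below its
# 21 F design dT, which is the efficiency loss the slide describes.
```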
Impact of higher temperature rise servers
• Even with improved best practices, newer servers can create challenges in existing data centers
  – A server ΔT greater than the CRAC unit's ΔT capability requires more airflow to meet the server kW load
  – The high exit temperatures at the server require even more careful management of the return air to the cooling unit
[Diagram: an 8 kW server with a 35°F rise (56°F in, 91°F out) draws 60% of the CRAC airflow; 40% bypasses at 56°F, giving a 75°F hot aisle and return, for a 21°F CRAC ΔT against a 54°F supply]
Introducing Efficiency Without Compromise™
• Expertise & support
• Improving performance of the IT infrastructure and environment
• Balancing high levels of availability and efficiency
• Adapting to IT changes for continuous optimization and design flexibility
• Delivering architectures from 10-60 kW/rack to minimize space and cost
Three Strategies for Enhancing Cooling Efficiency
Strategy 1: Getting the highest temperature to the cooling unit
• Higher return air temperatures increase the cooling unit's capacity and efficiency
• Increases the CRAC unit ΔT
• Increases the sensible heat ratio (SHR)
Raising the temperature at the room level
[Diagram: rows of racks served by CRAC units, shown with and without containment]
• Without containment, hot-air wrap-around occurs and limits the maximum return temperature. It is impacted by:
  – The length of the rack rows (long rows are difficult to predict)
  – Floor tile placement
  – Rack consistency and load
  It can be improved with ducted returns and local high density modules.
• With containment, hot-air wrap-around is eliminated and server supply temperatures are controlled:
  – Containment can be at the room or zone level
  – Supply air control provides consistency between containments
  – Can be used in combination with high density modules or row-based cooling
Strategy 2: Providing the right cooling and airflow
• Efficiency is gained when:
  Server load (kW) = cooling unit capacity (kW)
  Server airflow = cooling unit airflow
• Challenges
  – Rising server ΔT results in higher bypass air to meet the cooling load
  – Requires variable-speed fans and control of airflow independent of cooling capacity
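How much air a given load needs follows from the standard sensible-heat relation Q(BTU/hr) = 1.08 × CFM × ΔT(°F). A small sketch, where the 1.08 constant and the 3,412 BTU/hr-per-kW conversion are standard values rather than slide content:

```python
# Airflow required to carry a sensible load at a given temperature rise,
# using the standard Q(BTU/hr) = 1.08 * CFM * dT(F) rule of thumb.

def required_cfm(load_kw, delta_t_f):
    btu_per_hr = load_kw * 3412.0      # 1 kW ~= 3,412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

# A newer 8 kW / 35 F server needs about the same airflow as a legacy
# 4 kW / 18 F server despite double the load -- which is why airflow
# must be controlled independently of cooling capacity.
print(f"{required_cfm(8, 35):.0f} CFM for 8 kW at a 35 F rise")   # ~722
print(f"{required_cfm(4, 18):.0f} CFM for 4 kW at an 18 F rise")  # ~702
```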
Strategy 3: Provide the most efficient heat rejection components
• Reduce cooling unit fan power
• Maximize use of economization mode to reduce compressor hours of operation (chiller/compressor)
• Improve condenser heat transfer and reduce power
• Direct cooling: eliminate fan and compressor hours
High Density Equals Lower Cost
High density equals lower cost
Adding 2,000 kW of IT at 20 kW/rack vs. 5 kW/rack:
• Smaller building for high density
• Fewer racks for high density
• Capital equipment costs more
• Equipment installation costs are higher
• High density cooling is more efficient

Cost difference, low density vs. high density (parentheses denote savings from the high density design):
  Building capital costs @ $250/sq. ft.        ($1,875,000)
  Rack and PDU capital costs @ $2,500 each       ($750,000)
  Cooling equipment capital costs                 $320,000
  Installation costs                              $750,000
  Capital cost total                           ($1,555,000)
  Cooling operating costs (1 yr)                 ($420,480)
  Total net savings of a high density design   ($1,975,480)
  5-yr total net savings of high density       ($3,657,400)
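The totals in the table follow from simple sums of the line items. A quick arithmetic check, with savings written as negative numbers to mirror the parentheses:

```python
# Check of the high density vs. low density cost table (negative = savings).
building  = -1_875_000   # smaller building @ $250/sq. ft.
racks_pdu =   -750_000   # fewer racks and PDUs @ $2,500 each
cooling   =    320_000   # cooling equipment capital costs more
install   =    750_000   # installation costs are higher

capital = building + racks_pdu + cooling + install
opex_yr = -420_480       # annual cooling operating savings

print(f"Capital cost total:  {capital:>12,}")              # -1,555,000
print(f"Net savings, 1 yr:   {capital + opex_yr:>12,}")    # -1,975,480
print(f"Net savings, 5 yr:   {capital + 5*opex_yr:>12,}")  # -3,657,400
```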
Liebert XD vs. traditional cooling
• Traditional cooling unit: fan power of 8.5 kW per 100 kW of cooling; average entering air temperature of 72-80°F
• Liebert XD cooling unit: fan power of 2 kW per 100 kW of cooling; average entering air temperature of 95-100°F
[Chart: chiller capacity (kW) per kW of sensible heat load for a traditional CW CRAC, a CW enclosed rack, and refrigerant modules, broken into sensible load, fan load, and latent load]
Total cost of ownership benefits
• 65% less fan power
• Greater cooling coil effectiveness
• 100% sensible cooling
• 20% less chiller capacity required
• Overall 30% to 45% energy savings, yielding a payback in as little as one year
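For a sense of scale on the fan-power difference, a rough annual comparison; the 1,000 kW load, continuous operation, and $0.10/kWh rate are illustrative assumptions, not slide data:

```python
# Rough annual fan-energy comparison from the per-100 kW figures above.
it_load_kw   = 1000    # assumed cooling load
rate_per_kwh = 0.10    # assumed electricity rate, $/kWh

trad_fan_kw = 8.5 / 100 * it_load_kw   # traditional: 8.5 kW per 100 kW cooled
xd_fan_kw   = 2.0 / 100 * it_load_kw   # Liebert XD: 2 kW per 100 kW cooled

saved_kwh = (trad_fan_kw - xd_fan_kw) * 8760   # hours in a year
print(f"Fan energy saved: {saved_kwh:,.0f} kWh/yr, "
      f"about ${saved_kwh * rate_per_kwh:,.0f}/yr")   # 569,400 kWh, ~$56,940
```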
Solutions transforming high density to high efficiency
• Base infrastructure: XDC or XDP pumping units (160 kW), with future pumping units of larger capacity and N+1 capability
• Standard cooling modules, 10-35 kW: XDV10, XDO20, XDH20/32
• Rear door modules: XDR10/40, 10-40 kW
• Component cooling without server fans: XDS20, 20-40 kW
• Embedded cooling in super computers (microchannel intercoolers), 35-75 kW, capable of >100 kW
(new and future product configurations)
Rear door cooling
• Passive rear door cooling module
  – No cooling unit fans; server airflow only
  – Optimal containment system
  – Allows data centers to be designed in a "cascading" mode, simplifying the cooling requirements
  – Solves the problem for customers without hot and cold aisles
  – Room neutral; does not require airflow analysis
  – No electrical connections
Customer layout
[Diagram: rack rows with rear door cooling, showing the cascade effect from row to row]
Direct server cooling without server fans
Liebert XDS configuration:
• A cooling rack that uses the Clustered Systems cooling technology to move heat directly from the server to the Liebert XD pumped refrigerant system
• Heat is never transferred to the air
• Provides cooling for 36 1U servers
• Available initially in 20 kW capacity, with a 40 kW rack in the future
Benefits:
• No cooling module fans or server fans
• 8% to 10% smaller IT power requirement
• Chill Off 2 testing at 80°F fluid temperature resulted in an effective PUE < 1
• Next opportunity: XDP connection to a cooling tower without a chiller
[Diagram: cold plate spanning CPU 0 and CPU 1]
Energy consumption comparisons
[Chart: energy consumption by cooling approach; equivalent PUE < 1]
Industry evolving technology to improve data center cooling efficiency
Sandia National Laboratories' Energy Efficient Red Sky Design
David J. Martinez, Facilities Coordinator, Corporate Computing Facilities, Sandia National Laboratories
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Why build Red Sky?
What happened to consolidation?
• Addresses a critical national need for High Performance Computing (HPC)
• Replaces the aging current HPC system
System comparison
                        Thunderbird           Red Sky
Racks                   140 (large)           36 (small)
Performance             50 teraflops total    10 teraflops per rack
Power at full load      ~13 kW/rack           ~32 kW/rack
Cooling                 518 tons              328 tons
Water consumption       12.7M gal/yr          7.3M gal/yr
Red Sky and new technologies
The system design implemented three new technologies for power and cooling:
• Modular power distribution units
• Liebert XDP refrigerant cooling units
• Glacier Doors
The facility had to be prepared first.
Facility preparation
• 3.5 months, zero accidents
• 0.5 miles of copper with 650 brazed connections
• 400 ft. of carbon steel with 140 welded connections
[Photos: facility preparation]
New cooling solutions
XDP refrigerant units: a pumping unit that serves as an isolating interface between the building chilled-water system and the pumped refrigerant (R-134a) circuit.
• Operates above the dew point
• No compressor
• Power is used to cool the computers, not to dehumidify
• 100% sensible cooling at 0.13 kW per kW of cooling
Glacier Doors: the first rack-mounted, refrigerant-based passive cooling system on the market.
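To put the 0.13 kW-per-kW figure at Red Sky's scale, a sketch that applies it to the 328 tons from the system-comparison slide; treating the whole load as XDP-cooled is an illustration, not a stated design point:

```python
# What "0.13 kW per kW of cooling" means at Red Sky's scale.
KW_PER_TON = 3.517                # standard: 1 ton of cooling = 3.517 kW

heat_kw = 328 * KW_PER_TON        # ~1,154 kW of heat at 328 tons
xdp_kw  = 0.13 * heat_kw          # XDP pumping power, ~150 kW
print(f"Heat load: {heat_kw:,.0f} kW; XDP power draw: {xdp_kw:,.0f} kW")
```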
How it works: the total cooling solution
• 90% of the heat load is removed by the Liebert XDP and Glacier Door combination
• Uses a laminar air flow concept
• Perforated tiles are needed only in the first row
Laminar air flow
[Diagram: laminar air flow]
How it all comes together
[Diagram: the chiller plant feeds the Liebert XDP, which feeds the Glacier Doors; a 45°F temperature is shown]
How it all comes together
[Diagram: the chiller plant and a plate frame heat exchanger feed the Liebert XDP and a plug fan CRAC unit; supply water is 53°F from April to September and 61°F from October to March, with a 52°F point also shown]
• Chiller: 0.46 kW of power per ton of cooling
• Plate frame heat exchanger: 0.2 kW of power per ton of cooling
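The kW-per-ton figures imply the value of the water-side economizer. A sketch of the seasonal savings, where the six-month (4,380-hour) economizer window is an assumption read off the diagram's October-to-March split:

```python
# Economizer benefit implied by 0.46 kW/ton (chiller) vs. 0.2 kW/ton
# (plate frame heat exchanger), at Red Sky's 328-ton load.
tons = 328
chiller_kw    = 0.46 * tons       # ~151 kW on the chiller
economizer_kw = 0.20 * tons       # ~66 kW on the heat exchanger

hours = 4380                      # assumed Oct.-March economizer window
print(f"Avoided: {(chiller_kw - economizer_kw) * hours:,.0f} kWh/yr")
```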
[Chart: comparison of compute power and footprint]
[Chart: tons of cooling used]
[Chart: annual water consumption]
Carbon footprint
                      Red Sky   Thunderbird
CO2e (tonnes)         203       912
Footprint (gha)       46        205
Chiller plant power consumption and cost
                          Thunderbird (518 tons)   Red Sky (328 tons)
Kilowatt-hours per year   15,954,483               10,102,452
kWh cost per year         $1,324,222               $838,504
A 37% reduction with Red Sky.
Energy consumption
                          Thunderbird (21 CRACs)   Red Sky (12 XDPs & 3 CRACs)
Kilowatt-hours per year   1,527,604                340,437
kWh cost per year         $126,791                 $28,256
A 77% reduction with Red Sky.
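The stated reductions can be checked directly from the kWh figures on the two slides above:

```python
# Verify the 37% and 77% reductions from the reported annual kWh.
chiller = {"Thunderbird": 15_954_483, "Red Sky": 10_102_452}
cooling = {"Thunderbird": 1_527_604, "Red Sky": 340_437}

for name, kwh in (("Chiller plant", chiller), ("Cooling units", cooling)):
    cut = 1 - kwh["Red Sky"] / kwh["Thunderbird"]
    print(f"{name}: {cut:.1%} reduction")   # 36.7% and 77.7%
```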
Q & A
Steve Madara, Vice President and General Manager, Liebert North America Precision Cooling, Emerson Network Power
David J. Martinez, Facilities Coordinator, Corporate Computing Facilities, Sandia National Laboratories
Thank you for joining us!
• Look for more webcasts coming this fall!
• Follow @EmrsnNPDataCntr on Twitter