
Data Centres - Today’s Changing Landscape

Tony Day, Global Director, Data Centre Projects & Professional Services, APC by Schneider Electric

Solutions of the past are not sufficient for the future

The world’s energy system is at a crossroads. Current global trends in energy supply and consumption are patently unsustainable…

But that can – and must – be altered; there’s still time to change the road we’re on…

… what is needed is nothing short of an energy revolution.

Source: World Energy Outlook

The ICT sector

• Technology is key – tasks once handled through human interaction are now technology driven, and this has become a key business differentiator.

• “A crucial part of the solution to climate change” – European Commission Communication to the Parliament on “Mobilizing Information & Communication Technologies to facilitate the transition to an energy-efficient, low-carbon economy”

• The way to “save the first billion tons” of carbon – World Wildlife Fund.

• “The intelligent application of ICT can reduce our annual global emissions by 15% by 2020 (approx. 7.8Gt of carbon dioxide equivalent – carbon savings five times larger than the total emissions from the IT sector – and €600B of cost savings)” – The Global eSustainability Initiative SMART 2020 report.

Double-edged squeeze

The PLANET

Reduce carbon footprint!

The BUSINESS

More computing per watt!

Data center planning and operation are under increasing pressure

● Energy and service cost control pressure
● Increasing availability expectations
● Regulatory requirements
● Server consolidation
● Rapid changes in IT technology
● High-density blade server power/heat
● Dynamic power variation
● Uncertain long-term plans for capacity or density

In response, we need to change the way the world designs, installs, operates, manages, and maintains data centers

Time to value

Increasing regulation & codification
● UK Carbon Reduction Commitment (CRC)
● EU Code of Conduct for Data Centres (EU CoC)
● EU Energy Performance of Buildings Directive
● Revision of Part L of the Building Regulations 2000, “Conservation of fuel and power”
● Climate Change Levy
● F-Gas Regulation
● EU Eco-design Directive for Energy-Using Products, Energy Star, Eurovent certification
● BREEAM, LEED certification
● Carbon Trust ECA Scheme
● Green Grid metrics PUE/DCiE
● BCS
● Financial instruments
● Standards: ANSI/TIA-942-2005, ASHRAE TC9.9, etc.
● Legislators should always engage and consult with the industry to ensure the most effective means of dealing with the issues of energy consumption are implemented.

• The Green Grid global consortium is dedicated to standards, measurement, processes, and technologies to improve data center performance

• The U.S. Environmental Protection Agency (EPA) is defining data center efficiency standards (Energy Star ratings)

• The European Commission’s Institute for Energy is defining a “Code of Conduct” for data center efficiency

• Enterprise companies are starting to make public carbon commitments

Industry-wide movement is underway to shape policy and behavior

APC by Schneider Electric is a founding member and sits on the board of directors of The Green Grid

Data center governance

Efficiency can’t be ignored anymore

Lessons Learned?

“The farther backward you can look, the farther forward you are likely to see.” – Winston Churchill

ENIAC - 1945

1970/1985 - Big Iron

IBM 370/168 – 1972/1980

•Standard Hardware

•Standard Applications

•Service Levels

•High Cost

•Little Flexibility

1985/1995 - Client/Server Revolution

•Platform Choice

•Many Suppliers

•Chaotic Management

•Low Service Quality

1995/2000 - Internet Revolution (a.k.a. The Dotcom Bubble)

Unnamed colo, London 2000

•Increased Access

•Uncontrolled Growth

•Inefficient Deployments

•Unintegrated Applications

•Minimal Management

•Low Service

2000/2009 - Age of Maturity

Ferrari 2005

•Fewer Platform Choices

•Consolidation

•Improved Measurement

•SLAs / Metrics

•Virtualisation

The New Decade – Into the Cloud
• Datacenters are no longer a collection of disparate storage devices, communications equipment, and servers

•Hardware & software resources working to deliver unparalleled levels of efficiency, availability & performance

•Holistic approach to design & deployment

•The datacenter is the computer

Imperatives for the next decade

• Faster time to value
• Higher availability
• Lower total cost of ownership
  – Capex
  – Installation
  – Energy costs
  – Maintenance
• Better efficiency
• Integrated management systems

The Newest Challenge: EFFICIENCY

Provide power and cooling in the amount needed, when needed, and where needed – but no more than what is required for redundancy and safety margins.

But we can’t manage what we can’t measure.

Efficiency target:

[Diagram: the efficiency target spans the power system, the cooling system, and the IT load, at rack, row, room, and building level of the data center physical infrastructure.]

End-to-end, wrap-around portfolio enables holistic design for highest availability and efficiency

Key Energy consumption data points

• Typically 50% of power going into a datacenter goes to the power and cooling systems – NOT to the IT loads

• Every kW saved in a datacenter saves about £630 ($1,000) per year

• A 1% improvement in datacenter infrastructure efficiency (DCiE) corresponds to approximately a 2% reduction in electrical bills

• Every kW saved in a datacenter reduces carbon dioxide emissions by 5 tonnes per year

• The typical 1MW (IT load) datacenter is continuously wasting about 400kW due to poor design (DCiE = 50%, instead of best-practice 70%)
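As a quick back-of-the-envelope illustration of these rules of thumb (a minimal sketch; the £630/kW-year and 5 tonnes/kW-year rates are the figures quoted above, and 400kW is the slide’s own example):

```python
# Back-of-the-envelope savings estimate using the rules of thumb above.
# The rates are the presentation's figures, not measured values.
COST_PER_KW_YEAR_GBP = 630       # every kW saved ~ £630 per year
CO2_TONNES_PER_KW_YEAR = 5       # every kW saved ~ 5 tonnes of CO2 per year

def annual_savings(kw_saved: float) -> tuple[float, float]:
    """Return (GBP saved per year, tonnes of CO2 avoided per year)."""
    return kw_saved * COST_PER_KW_YEAR_GBP, kw_saved * CO2_TONNES_PER_KW_YEAR

# Example: the "typical" 1MW-IT datacenter wasting ~400kW (DCiE 50% vs 70%)
gbp, co2 = annual_savings(400)
print(f"£{gbp:,.0f} per year, {co2:,.0f} tonnes of CO2 per year")
# -> £252,000 per year, 2,000 tonnes of CO2 per year
```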

Reference: APC White Paper #66

Inefficiencies Create Consumption

• Computing inefficiencies > more servers
• Server inefficiencies > more power and cooling
• Power and cooling inefficiencies > more power consumption

Inefficiencies drive both power consumption and material consumption

2008 IDC study

Rising densities have led to an increase in power and cooling issues

Power and cooling:

The new #1 datacenter issue

Causes of Energy Inefficiency in Datacenters

Primary causes
• Oversizing of power and cooling equipment
• Pushing cooling systems to cool densities higher than they were designed for
• Redundancy (for high availability)
• Inefficient power and cooling equipment
• Ineffective airflow patterns

Secondary causes (mainly operational practices)
• Ineffective room layout
• Inefficient operating settings of cooling equipment
• Clogged air or water filters
• Disabled or malfunctioning cooling economizer modes
• Raised floor clogged with wires

CPU Power Consumption

● Accepting 20% utilization as the average, the IBM figures suggest that the power dedicated to computation is about 2% of total data centre power.

● From Dell’s figures, using the same utilization ratio, we get 1.6%.
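One hedged reconstruction of the arithmetic behind these percentages (the 50% DCiE and the 20% CPU share of server power are illustrative assumptions, not the IBM or Dell source data):

```python
# Illustrative reconstruction of the "~2% useful compute power" figure.
# The splits below are assumptions for the sake of the arithmetic; the
# talk cites IBM and Dell figures that are not reproduced here.
dcie = 0.50                  # fraction of facility power reaching the IT load
cpu_share_of_server = 0.20   # assumed fraction of server power drawn by CPUs
utilization = 0.20           # average CPU utilization, per the slide

# Rough model: useful computation scales linearly with CPU utilization.
compute_fraction = dcie * cpu_share_of_server * utilization
print(f"Power dedicated to computation ≈ {compute_fraction:.1%}")  # ≈ 2.0%
```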

Changes to impact efficiency and power consumption

• At the top of the pyramid, changes have a minuscule impact on power consumption for the data centre as a whole, yet they can have a dramatic impact on data centre efficiency, i.e. useful computations done.

• Increase CPU utilisation from 20% to 60–80% or more (leaving headroom so the system stays responsive to workload peaks) through consolidation/virtualisation.

• A server refresh can potentially double the output per CPU if the servers are two years old, or more than quadruple it if they are four years old.

• At the bottom of the pyramid: plain energy saving.

Oversizing datacenter capacity

[Chart: % capacity (0%–120%) vs. years (0–10), plotting expected load, actual load, installed capacity, and room capacity; the gap between installed capacity and actual load is the waste due to oversizing.]

Reference: APC White Paper #37

Convert 2N power and cooling into N+N
● Double the IT load from the same power input
• The downside is that some load will have to be dropped in the event of a power or cooling failure
• However, UPS + generator backup on each power feed and the cooling can be hardened too
• All critical load is also duplicated in a second DR facility, so the worst-case impact is 25% (see the sketch below)
• The business needs to sign off on the extra risk and on the need for preventative maintenance outages
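A minimal sketch of one reading of that 25% figure, assuming the load is split evenly across the A and B feeds and fully mirrored at the DR site:

```python
# One hedged reading of the worst-case figure: with critical load mirrored
# across a primary site and a DR site, and split evenly across A/B feeds,
# a total A-side (or B-side) failure at one site drops half of that site's
# load, i.e. a quarter of the combined deployed capacity.
total_deployed = 1.0                 # normalised: primary + DR copies
per_site = total_deployed / 2        # mirrored across two facilities
lost_in_feed_failure = per_site / 2  # one of two feeds fails at one site

print(f"Worst-case impact: {lost_in_feed_failure:.0%}")  # -> 25%
```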

Proposed Dedicated Blade Hall Layout
● Assumes 2.1MW @ 2N power (4.2MW @ N), 200kW per compute cell
● 10–20 cells, depending on the 2N or N+N choice

If “A” or “B” power or cooling completely fails, half of the “N+N” load will be required to shut down. The “2N” load will remain fully operational.

Note that the A/B split is across the centre of a compute cell. This permits half the cell and half the cooling to remain operational, which gives added HA benefits for grid and clustered usage across each side of the cell.

Tiering – four levels relating to availability and security (ANSI/TIA-942-2005)

“Is it safe?”

● Would the business reject any additional risk, regardless of the potential benefits?

● Recent trends in an increasingly power-constrained and regulated market are a move from Tier 3 or Tier 3+ to Tier 2, and in some cases to two Tier 1 or three Tier 1 facilities in a delta, spaced within the latency limits.

● The modular datacentre (depending on the configuration of its building blocks) offers the opportunity to have multiple tier levels within the same facility and/or to use the lower tiers to benefit from a better PUE without any real loss in availability and security.

The Datacenter Power Chain

PUE

The Datacenter Power Chain

Take the power to the DC, or the DC to the power?

IT is becoming denser, with more frequent change and a focus on efficiency and cost
● Blade technology adoption as a mainstream architecture
● Consolidation & virtualisation
● Dynamic cooling loads
● Cloud computing (external / internal)
● Warehouse-Scale Computer (WSC)
● Thin-client architecture

Together with the above, the recent economic turmoil and the ever-shrinking business horizon have driven the need for more flexible, scalable, modular DC facilities

Virtualisation

“Virtualisation is the new operating system for the Datacenter”

Michael Dell IDC ME CIO Conference, Dubai Jan 2010

Minimize the inefficiency of oversizing during virtualization and re-growth, and be prepared for the higher densities to come

[Diagram: power/cooling capacity tracking the load – the original load shrinks when virtualized, so the infrastructure scales DOWN with it, then scales UP in rack density as the virtualized load regrows.]

Scalable infrastructure minimizes waste

Scalable power and cooling results in better PUE by tracking the IT load as it shrinks and grows

Raising the efficiency curve

Fixed losses and consolidation

Need to reduce fixed losses
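A minimal sketch of why fixed losses matter so much at partial load, using an assumed two-part loss model (a fixed overhead plus losses proportional to the IT load; the numbers are illustrative only):

```python
# Assumed loss model: infrastructure power = fixed losses + losses that
# scale with the IT load. Fixed losses dominate PUE at partial load,
# which is why right-sizing and reducing fixed losses raise the curve.
def pue_at(it_kw: float, fixed_kw: float = 200.0, proportional: float = 0.3) -> float:
    infrastructure_kw = fixed_kw + proportional * it_kw
    return (it_kw + infrastructure_kw) / it_kw

for it_kw in (250, 500, 1000):
    print(f"{it_kw}kW IT load -> PUE {pue_at(it_kw):.2f}")
# 250kW -> 2.10, 500kW -> 1.70, 1000kW -> 1.50
```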

[Diagram: facility input power (“watts in”) splits between the physical infrastructure (DCPI) and the IT load (“IT watts”); within the IT watts, only part becomes useful computing.]

PUE = total facility power (watts in) ÷ IT load power (IT watts)

LOWER is better; 1 is perfect

Datacenter Efficiency Metric – Power Usage Effectiveness (PUE)
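A minimal sketch of the two metrics as defined above (no assumptions beyond the definitions themselves):

```python
# PUE and DCiE as defined above. PUE: lower is better, 1.0 is perfect.
# DCiE is simply the reciprocal, expressed as a fraction.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load power."""
    return total_facility_kw / it_load_kw

def dcie(total_facility_kw: float, it_load_kw: float) -> float:
    """Datacenter infrastructure Efficiency: IT load / total facility power."""
    return it_load_kw / total_facility_kw

# Example: 2,000kW into the facility, 1,000kW of it reaching the IT load
print(pue(2000, 1000))   # 2.0
print(dcie(2000, 1000))  # 0.5, i.e. a DCiE of 50%
```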


PUE/DCiE – define the basis of calculation
● Data center efficiency metrics PUE and DCiE have yet to develop into formal, detailed standards that define which devices or subsystems should be included in the metric and which should not. Therefore, APC by Schneider Electric has standardized on its own criteria for which devices or subsystems are included and excluded. The table shows all the common subsystems of a data center and separates them into one of three categories: IT Load, Physical Infrastructure, and Not Included. The physical infrastructure category includes subsystems such as the UPS, PDU, generator, etc.

Efficiency Analysis of Proposed Data Centre Concept

● An efficiency model was developed for the proposed data center concept, which resulted in an estimated annualized PUE of 1.22 at 100% load, or a DCiE of 82% at 100% load. This model provides the entire efficiency curve, which allows efficiency estimates at any load level. For comparison, a typical data center in this location at 100% load would have a PUE of 1.76, or a DCiE of 57%. Both of these curves are illustrated in the accompanying figure.
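To put the two figures side by side, a hedged worked example at an assumed 1,000kW IT load (the load value is illustrative; the £630/kW-year rate is the rule of thumb quoted earlier):

```python
# Facility draw at 100% load for the two concepts, at an assumed 1,000kW
# IT load. PUE figures (1.22 proposed, 1.76 typical) are from the slide.
it_load_kw = 1000
proposed_kw = it_load_kw * 1.22   # 1,220kW total facility draw
typical_kw = it_load_kw * 1.76    # 1,760kW total facility draw

saved_kw = typical_kw - proposed_kw
print(f"Continuous saving: {saved_kw:.0f}kW")                # 540kW
print(f"≈ £{saved_kw * 630:,.0f} per year at £630/kW-year")  # ≈ £340,200
```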

[Charts: DCiE (0%–90%) and PUE (0.00–6.00) vs. % IT load (0%–100%), each comparing the Microsoft concept against a typical data center.]

Drivers of infrastructure efficiency gains

Baseline: Average of existing installed base

Goal: reduce the industry average PUE from 2.13 to 1.39
How to get there: relative contribution of data center improvements

● Cooling economizers – 32% contribution, 0.24 PUE reduction
● Convert from room cooling to dynamic row/rack cooling – 16% contribution, 0.12 PUE reduction
● Right-sizing via modular power and cooling – 16% contribution, 0.12 PUE reduction
● Higher UPS efficiency – 16% contribution, 0.12 PUE reduction
● 415/240V transformerless power distribution (NAM) – 10% contribution, 0.07 PUE reduction
● Dynamic control of the cooling plant (VFD fans, pumps, chillers) – 10% contribution, 0.07 PUE reduction

PUE: 2.13 → 1.39
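The six reductions are additive; a one-line check (note that the pairing of percentages to specific improvements above follows the slide’s ordering and is a reconstruction):

```python
# Sanity check: the listed PUE reductions account for the whole
# 2.13 -> 1.39 improvement claimed on the slide.
reductions = [0.24, 0.12, 0.12, 0.12, 0.07, 0.07]
print(round(2.13 - sum(reductions), 2))  # -> 1.39
```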

The PUE KPI is the industry-standard metric for measuring and benchmarking datacenter energy efficiency

Maximum PUE improvement = server virtualisation + scalable power and cooling infrastructure

Scalable Power and Cooling infrastructure and IT virtualization can DOUBLE electrical savings.

Row and rack cooling architecture addresses increasingly higher IT density requirements.

Efficiency dashboard

InfraStruXure Central®

Enterprise portal

Integration is key to optimise efficiency

[Diagram: a common supervision layer spanning process & machines, building control, white space, power, and security.]

Energy data access at low cost

Energy management services

Datacenter Electrical Efficiency Assessments

• Breakdown of power, cooling & lighting losses
• Breakdown of cooling losses into CRAC, humidification, and outdoor heat losses
• Breakdown of power losses into UPS and power distribution
• Calculation of DCiE
• Detailed recommendations to improve efficiency
• Projected efficiency gains for each recommended improvement

Software Design Tools

Data Center Operations Management

Security & Environmental
• NetBotz
• Pelco

Data Center Physical Infrastructure

Energy Management
• PowerLogic ION-E
• PowerLogic meters
• Cisco EnergyWise
• IBM Active Energy Manager

Enterprise Management Systems
• Microsoft System Center Operations Manager (SCOM)
• Microsoft System Center Essentials
• IBM Tivoli

Building Management Systems
• Schneider TAC product line, including Andover Continuum, Vista
• All major brands

InfraStruXure Central
• Centralized & real-time monitoring
• Fault notification & graphical trending
• Thresholds & alarm settings
• Auto-discovery
• Mass configuration
• Multi-vendor device support

InfraStruXure Operations

Capacity & Change Manager

Ensure capacity meets demand at row, rack, and server level

• Incremental build-out
• Standardized components
• Targeted density, efficiency, availability

Modular/standardized/scalable architecture reduces both upfront AND operating costs

“Step and repeat” strategy

“POD” Architecture

Capital Expenditure

"It's a very sobering feeling to be up in space and realize that one's safety factor was determined by the lowest bidder on a government contract."

Alan Shepard (astronaut, 1923–1998)

● A “mini data center” with its own cooling

● Contributes no heat to rest of data center

●Works with existing room-based cooling

● Hot/cool air circulation localized within the pod by short air paths and/or containment

● Achieves optimal efficiency

● Targeted availability

Targeted Zone Architecture

Reference: APC White Paper #134

Large Standardised Scalable Facilities

Using the industrial shed

Start with the low-hanging fruit first

Beyond the crinkly tin

Prefabricated modular rooms

● Normal DC room heights – 3.8m

● Normal DC aisle widths – 1.35m

● Takes deep racks – 1.2m

● Each room with two 18m rows

● 500kW critical load per room

● N+1 cooling, dual feed power

● Weathered – use externally or internally

● Transportable
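A hedged arithmetic aside on what those room figures imply for per-rack density (the 600mm rack width is an assumption; the row length, row count, and critical load come from the bullets above):

```python
# Implied rack count and average density per prefabricated room.
row_length_m = 18.0
rows_per_room = 2
rack_width_m = 0.6        # assumption: standard 600mm rack footprint
critical_load_kw = 500.0  # per room, from the slide

racks = int(row_length_m / rack_width_m) * rows_per_room  # 60 racks
print(f"{racks} racks, ~{critical_load_kw / racks:.1f}kW per rack on average")
# -> 60 racks, ~8.3kW per rack
```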

Prefabricated substations

Std ISO container with UPS & Chillers

• Utility support for power and cooling

• IT Rack Box

Containers stacked inside – MS Chicago

Modular, scalable using containers

We have been there before

The modular scalable DC – Tecnikon, shown at ECTA Vienna in 2000

• Tecnikon MDC: 58RMU air-cooled, sealed IP65, fully ducted racks, 12.5kW per rack

• Pre-assembled fully wired spines (rows – 20 racks per row)

• Plug-in plant modules, stair and lift towers

• Factory built/prefabricated, transported by road – can be relocated for reuse elsewhere; fast build

Cooling with external air

Modular approach

Masdar City – Exemplar for ecological design

Summary

Thank You
