State of Georgia Server & Storage Contract
pur.doas.ga.gov/TGM/SWC_Contracts/SWC90813_02_Presentation.pdf
Post on 10-Jul-2020
Transcript
State of Georgia Server & Storage Contract
Technical Presentation – April 2011
Agenda & Introduction
Contract Overview – Russ Boles
System x & BladeCenter Servers – Dave Laubscher
Power System Servers – Cindy Henricks
Storage – Scott Venuti
Slide 2
IBM System x and BladeCenter Update
Dave Laubscher – dalaubsc@us.ibm.com
x86 Trends
IBM Innovation
Systems & Solutions
Slide 4
Processor Roadmap (2009 – 2011)

7000 Series, 4+ socket:
- Dunnington (74xx), 45nm – Intel Core microarchitecture, Intel I/OAT, dedicated high-speed FSB; 90, 130W
- Nehalem EX (75xx/65xx), 45nm – 4-8 core, QPI / direct-attach memory; 95, 130W
- Westmere EX, 32nm – 6-10 core, 4 QPI links, 3MB L2/core; 90, 130W

5000 Series, 2 socket:
- Nehalem EP (55xx), 45nm – QPI / direct-attach memory; 60, 80, 130W
- Westmere EP (56xx), 32nm – 6 core, 2MB L2/core; 95, 130W

AMD 2xxx/6xxx Series – 79 (55), 115 (75), 137 (105)W:
- Istanbul, 45nm – quad core to six core
- Magny-Cours (6xxx), 45nm – 8-12 core (beyond six core)

Slide 5
Next Generation Intel® Xeon Processor
Westmere-EP
- Up to 6 cores / 12 threads per socket with Hyper-Threading
- Based on 32nm process technology
- 12 MB shared cache
- Intel Turbo Boost optimization
- Two QPI links (only 1 is a coherent link) per socket
- One integrated memory controller (IMC)
- DDR3L support, three DDR channels per socket
Westmere-EX
- Up to 10 cores / 20 threads per socket, Nehalem core architecture, 32 nm process, 24MB shared L3
- SMT (~Hyper-Threading)
- Turbo Boost
- Four QPI links (3 coherent)
- Two integrated memory controllers (IMC)
- Four buffered memory channels per socket
- Enhanced RAS characteristics
Slide 6
Slide 7
http://www.intel.com/Assets/en_US/PDF/whitepaper/323479.pdf
x86 Trends
IBM Innovation
Systems & Solutions
Slide 8
IBM X-Architecture™
A design blueprint for building proven IBM enterprise capability into the industry-standard System x product line
[Diagram: industry-standard technologies and ease of use, combined with enterprise capability drawn from System Storage™, System z™ and POWER Systems™, feed System x™, iDataPlex™ and BladeCenter® – delivering virtualization, scalability, reliability, efficiency and availability]
Slide 9
eX5 Portfolio – Systems for a Smarter Planet
- System x3850 X5 (4U / 4-way): Consolidation, virtualization, and database workloads being migrated off of proprietary hardware are demanding more addressability.
- System x3690 X5: Powerful and scalable system allows some workloads to migrate onto a 2-socket design that delivers enterprise computing in a dense package. Broad coverage for most enterprise applications, server consolidation, and virtualized workload enablement.
- BladeCenter HX5 (scalable blade): Demand for minimum footprint as well as integrated networking infrastructure has increased the growth of the blade form factor.
Slide 10
MAX5 Doubles Memory Capacity
MAX5 features:
- Expand memory capacity – up to double the number of memory DIMMs of competitors
- No impact to memory latency
- Over five times the memory capacity in two sockets vs. today's leading two-socket systems
- MAX5 memory may be partitioned to CPUs or pooled
- Provides the memory customers have needed for database and virtualization – up to 100% more virtual machines
- Allows higher memory capacity to be reached with less expensive DIMMs for more economical high-end implementations
Slide 11
eX5 is RIGHT for your workloads today, and it’s the only platform that can grow with your business
Configuration options (processors, DIMMs):

                                              HX5       x3850 X5    x3690 X5
Base system                                   2P, 16D   4P, 64D     2P, 32D
Native (QPI) scaling                          4P, 32D   8P, 128D    –
Memory expansion with MAX5                    2P, 40D   4P, 96D     2P, 64D
Memory expansion and scaling
with MAX5 (1H 2011)                           –         8P, 192D    –

Slide 12
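A quick check of the top-end configuration shown here (8 processors, 192 DIMMs): the 32GB-per-DIMM size below is an assumption consistent with the 6TB eX5 maximum quoted on the following slide, not a figure stated on this one.

```python
# Sanity check of the largest eX5 configuration (8P, 192 DIMMs).
# The 32GB-per-DIMM figure is assumed, not stated on the slide.
dimms = 192
dimm_size_gb = 32

total_tb = dimms * dimm_size_gb / 1024
print(total_tb)  # 6.0 -> matches the 6TB maximum quoted for eX5
```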
Maximize memory
The new eX5 portfolio provides as much as twice the amount of memory as competitive systems.
[Chart: maximum DIMM counts by server class (2-socket, 4-socket, 8-socket) – eX5 systems reach up to 192 DIMMs / 6TB, roughly double competitive systems (2TB and 3TB tiers shown)]
Slide 13
Minimize cost
Eliminate the memory bottleneck and minimize VMware licensing costs
- x3690 X5 with MAX5 vs. a 2S non-IBM server: 82% more VMs for the same 2S license cost
- x3690 X5 with MAX5 vs. a 4S non-IBM server: the same number of VMs at 50% of the license cost
Slide 14
Note: Software licensing costs for VMware Enterprise Plus: $3,500 per processor. Memory is constrained before processors are fully utilized.
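The per-processor licensing math behind the 50% claim is simple; the $3,500-per-processor figure comes from the slide's note, and the socket counts from the 2S vs. 4S comparison:

```python
# VMware Enterprise Plus list price per processor (from the slide's note)
price_per_proc = 3500

cost_2s = 2 * price_per_proc  # 2-socket x3690 X5 with MAX5
cost_4s = 4 * price_per_proc  # 4-socket non-IBM server

# Hosting the same VMs on the memory-expanded 2-socket box halves the bill.
savings = 1 - cost_2s / cost_4s
print(cost_2s, cost_4s, f"{savings:.0%}")  # 7000 14000 50%
```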
Minimize cost
Reduce Microsoft SQL software licensing costs by 50 percent using eX5 technology
Scenario: a 1,000-user SQL Server 2008 database using a two-socket Nehalem-EX system.
- Non-IBM Nehalem-EX server: SQL Server 2008 Enterprise Edition licensing cost US$100,000
- System x3690 X5 2-socket Nehalem-EX server: licensing cost US$50,000 – 50% of the license cost for the same number of users
Slide 15
Notes: All pricing shown is Microsoft list pricing as of January 2010 per http://www.microsoft.com/sqlserver/2008/en/us/pricing.aspx. The pricing model used is per-processor licensing, which is based on $24,999 per physical socket on the server (logical cores are not counted).
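The slide's two totals follow directly from the per-socket list price in the notes; the socket counts (4 vs. 2) are inferred from the prices, since the non-IBM server's socket count is not stated:

```python
# SQL Server 2008 Enterprise per-processor (per-socket) list price, Jan 2010
per_socket = 24999

# 4 sockets vs. 2 sockets reproduces the slide's US$100,000 vs. US$50,000
# totals once rounded to the nearest thousand dollars.
print(round(4 * per_socket, -3))  # 100000
print(round(2 * per_socket, -3))  # 50000
```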
Light Path Diagnostics
Locate and diagnose hardware problems quickly
- Exclusive rapid fault isolation, avoiding or significantly reducing downtime
- Works at hardware level – no OS required
Continued enhancements:
- Innovative slide-out panel – rack servers
- Exclusive front cover panel – tower servers
- Remind button
- Persistent Light Path
Slide 16
IBM Innovation in Energy-Efficient Technology
Slide 17
Increased hardware utilization through Virtual Fabric
A Virtual Fabric Adapter (CNA) in each System x server splits a single 10Gb link into virtual NICs, for example:
- vNIC 1 = 1Gb
- vNIC 2 = 3.5Gb
- vNIC 3 = 5Gb (or FCoE = 5Gb)
- vNIC 4 = 500Mb
The vNICs connect through the BNT Virtual Fabric 10Gb Switch Module to a BNT RackSwitch™ G8124.
Slide 18
- Reduces switch, cable and adapter costs and achieves better energy efficiency
- Allocate and adjust bandwidth based on actual requirements
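The example vNIC split on this slide exactly fills the physical 10Gb port; a minimal sketch of the bandwidth bookkeeping (plain Python, not a switch CLI):

```python
# vNIC bandwidth allocations from the slide, in Gb
vnics = {"vNIC 1": 1.0, "vNIC 2": 3.5, "vNIC 3 (or FCoE)": 5.0, "vNIC 4": 0.5}

link_capacity_gb = 10.0
allocated = sum(vnics.values())

# Allocations must fit within the single physical 10Gb CNA port.
assert allocated <= link_capacity_gb
print(allocated)  # 10.0
```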
x86 Trends
IBM Innovation
Systems & Solutions
Slide 19
Dynamic Infrastructure with System x & BladeCenter – "Fit for Purpose"
[Diagram: portfolio positioned along scale-up and scale-out axes]
- Enterprise (scale up): server consolidation, virtualization
- BladeCenter: infrastructure simplification, virtualization, application serving
- iDataPlex (scale out): Web 2.0, HPC, grid, energy efficiency
- Rack and Tower: stand-alone & multiple applications
Slide 20
x3850 X5: 4-socket 4U x86 (Nehalem EX) platform
(2x) 1975W Rear Access Hot Swap, Redundant P/S
(8x) Memory Cards for 64 DIMMs –8 1066MHz DDR3 DIMMs per card 6x - PCIe Gen2 Slots
(+2 additional)
2x 10Gb Ethernet Adapter
(+2 additional)
(4x) Intel Xeon CPU’s
RAID (0/1) standard
2x 60mm Hot Swap Fans
(8x) Gen2 2.5” Drives or
(2x) 120mm Hot Swap Fans
RAID (0/1) standard, RAID 5/6 Optional
DVD Drive
Slide 21
( )2 FlashPacks
Dual USB Light Path Diagnostics
DVD Drive
x3690 X5: 2-socket 2U (Nehalem EX) platform
- 32x DDR3 memory DIMMs – 16 in upper mezzanine, 16 below
- 8x memory buffers
- Slots: (1x) PCIe x16 (full height, full length) or (2x) x8 (1 full size, 1 full height / half length); (2x) PCIe x8 low profile; (1x) PCIe x4 low profile for RAID
- Scaling ports
- (4x) N+N 675W rear-access hot-swap redundant power supplies
- (16x) Gen2 2.5" drives or 3 FlashPacks
- (4x) 60mm hot-swap fans
- DVD drive; dual USB; Light Path Diagnostics
Slide 22
MAX5 for x3690 X5 and x3850 X5
- 32 memory DIMMs; memory buffers
- Firehawk chipset
- QPI ports attach to systems
- EXA ports provide scalability to other memory drawers
- Hot-swap fans; system removes from chassis for easy access
- Light Path Diagnostics
Slide 23
Dynamic Infrastructure with System x & BladeCenter – "Fit for Purpose"
Slide 24
System x High Volume Portfolio
Exceptional quality, value, performance and ease of use from the trusted leader
- x3650 M3 – The Ultimate Business Critical Server
- x3550 M3 – General Business Compact Server
- x3500 M3 – Business Critical Tower Server
- x3400 M3 – General Business with Enterprise Features
- x3755 M3 – 4 Sockets with 2-Socket Economics
- x3250 M3 – Big Performance at Entry Value
- x3620 M3 – The Value Storage Server
- x3200 M3 – Entry Value Business Engine
- x3630 M3 – The Storage-Dense Server
Leadership brand, quality & service · Delivering value · Leading technology · Easy to deploy, manage & service
Slide 25
System x3630 M3: 2-socket, 2U, storage-rich
Maximizing internal storage density for companies on tight budgets
A new approach delivering lowest cost per terabyte: a 2U storage-rich server with high capacity and a choice of low-cost 3.5" or 2.5" HDDs, supporting up to 28TB of storage – large storage capacity without going external.

x3630 M3:
- Processor: two 40W-95W Westmere processors, up to 2.66GHz (4-core) & 2.93GHz (6-core); or two 80W Xeon 5500 Series processors, up to 2.26GHz (4-core)
- Memory: 2DPC @ 1333MHz for Westmere CPU SKUs; 12 RDIMMs (1.5V and 1.35V), up to 96 GB
- Storage: up to 14x 3.5" hot-swap SAS/SATA HDDs or up to 28x 2.5" hot-swap SAS/SATA HDDs; mixing of different drives allowed, up to 28TB disk; 6Gbps RAID infrastructure – backplane, RAID card
- Model definition: Westmere and select Nehalem CPU SKUs; 6Gbps RAID adapter on hot-swap models
- Power supply: redundant power supply base 1x 675W (added PSU options for field upgrade to redundancy)
- Systems management: IBM tools support – IBM Director, ToolsCenter, and IMM
- Service: worldwide enablement (3-year onsite world-class support standard)
- Workloads: backup server, Web 2.0, imagery, video/photo sharing, online gaming, blogs, messaging, web searching, video recording, mail server (Notes and Exchange), transactional data, file/print
Slide 26
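The 28TB maximum is the drive-bay count times the drive size; the 2TB-per-drive figure is inferred from era-typical 3.5" SATA capacities, not stated on the slide (the same assumption yields the x3620 M3's 16TB from its 8 bays):

```python
# x3630 M3: 14x 3.5" bays; 2TB per drive is an inferred, era-typical size
bays = 14
drive_tb = 2
print(bays * drive_tb)  # 28 -> the slide's "up to 28TB"
```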
The Dual-Socket 2U IBM x3620 M3
Provides a value offering that balances cost and function
A cost-optimized alternative dual-socket 2U server: choice of SW or full HW advanced RAID; basic or full redundancy; LV, 80, and 95W processors; simple-swap SATA or hot-swap SAS.

x3620 M3:
- Processor: two 40W-95W Westmere processors, up to 2.66GHz (4-core) & 2.93GHz (6-core); or two 80W Xeon 5500 Series processors, up to 2.26GHz (4-core)
- Memory: 2DPC @ 1333MHz for Westmere CPU SKUs; 12 RDIMMs (1.5V and 1.35V), up to 96 GB
- Storage: up to 4x 3.5" simple-swap SATA HDDs or up to 8x 3.5" hot-swap SAS/SATA HDDs; mixing of different drives allowed, up to 16TB disk; 6Gbps RAID infrastructure – backplane, RAID card
- Model definition: Westmere and select Nehalem CPU SKUs; SW RAID for simple-swap models; 6Gbps RAID and low-cost 3Gbps RAID adapter on hot-swap models
- Power supply: redundant power supply base 1x 675W (added PSU options for field upgrade to redundancy)
- Systems management: IBM tools support – IBM Director, ToolsCenter, and IMM
- Service: worldwide enablement (3-year onsite world-class support standard)
Slide 27
Dynamic Infrastructure with System x & BladeCenter – "Fit for Purpose"
Slide 28
iDataPlex – Innovative Design
Rotated the rack 90 degrees:
- Doubles the density in the same footprint
- Half-depth, front-access servers – low airflow impedance, greater cross-section for RDHx
- 100U rack – 16 vertical U for switches and PDUs
[Diagram: footprint comparison – two standard 19" racks (444 x 700mm bays, ~1200mm deep) vs. one iDataPlex rack (446 x 520mm bays, 600mm deep, 840mm with RDHx)]
Slide 29
Dynamic Infrastructure with System x & BladeCenter – "Fit for Purpose"
Slide 30
Extend blade benefits to your entire business
Chassis tailored to your specific needs:
- IBM BladeCenter E – enterprise, best energy, best density; new models with 2320W PS, upgrades for existing BC E
- IBM BladeCenter T – ruggedized
- IBM BladeCenter H – high performance; new, more efficient 2980W PS, upgrades for existing BC H
- IBM BladeCenter HT – ruggedized, high performance
- IBM BladeCenter S – distributed, small office, easy to configure; 110V power!
A common set of blades, a common set of industry-standard switches and I/O fabrics, and a common management infrastructure.
Slide 31
http://www.80plus.org/manu/psu/psu_detail.aspx?id=111&type=1
IBM BladeCenter x86 Blade Servers

HS12 – most affordable blade for single-threaded apps
- 1 socket; Proc: Xeon QC; Mem: 6 DIMM / 24GB; HDD: 2 hot-swap
- Key features: 2 hot-swap HDDs, SAS/SATA/SSD, optional battery-backed cache
- Sample applications: e-mail / collaboration, hosted client, web serving

HS22 – "no compromise" enterprise blade
- 2 socket; Proc: Xeon QC; Mem: 12 DIMM / 192GB; HDD: 2 hot-swap
- Key features: 2 HS HDDs with choice of SAS or SSD, large memory on a blade, max memory per socket, embedded hypervisor, battery-backed cache option, IMM
- Sample applications: virtualization, e-mail/collaboration, hosted client, web serving

HS22V (New!) – high-performance virtualization blade
- 2 socket; Proc: Xeon QC; Mem: 18 DIMM / 288GB; SSD: 2
- Key features: 2P, 18 DIMMs, 2 SSDs in 30mm, embedded hypervisor, maximum memory, IMM
- Sample applications: virtualization, e-mail/collaboration, hosted client, web serving

Slide 32
HS22 Blade
"No compromise" next-generation enterprise blade
- 2x Intel Xeon 5600 processors (Westmere EP)
- 12x VLP DDR3 memory (192GB max / 1333MHz max)
- 2x hot-swap drive bays (SAS or solid state)
- Optional RAID 5 with battery-backed write-back cache
- Internal USB (embedded hypervisor)
- Dual GbE LOM (5709S)
- 2 I/O expansion slots (1x CIOv + 1x CFFh)
What makes the HS22 special?
- Uses Tylersburg-36 chipset (vs. Tylersburg-24)
- More PCI lanes for expansion (x8 vs. x4)
- Support for more 10G networks
- Supports 130W processors
Additional features: IMM & UEFI, Light Path Diagnostics, TPM 1.2, expansion blades; chassis support: BCE, BCH, BCS, BCHT
Slide 33
HS22V Detail
Max-memory DP blade for virtualization and HPC
- 2x EP processors
- 18x VLP DDR3 memory (288GB max / 1333MHz max), 2DPC @ 1333MHz
- 2x 1.8" SSD, NHS RAID 0/1
- Internal USB (embedded hypervisor)
- Dual 1GbE LOM
- Same I/O as HS22 (1x CIOv + 1x CFFh)
What makes the HS22V special?
- Uses Tylersburg-36 chipset
- Supports 130W processors
- Internal SSDs HW RAID protected
Additional features: IMM & UEFI, P-state capping, TPM 1.2, Light Path Diagnostics, PCI Expansion Blade
Slide 34
HX5 Blade
- 2x Intel Nehalem-EX processors
- 16x VLP DDR3 memory
- 2x SSD drives (1.8")
- Dual & redundant I/O and power
- 2x I/O expansion slots (1x CIOv + 1x CFFh)
Slide 35
HX5 4-Socket Blade
- 2x 30mm nodes joined by a scale connector
- 4x Intel Xeon 7500 Series CPUs (2 per node)
- 32x VLP DDR3 memory (16 per node)
- 16x memory buffers (8 per node)
- 4x SSD drives (1.8") (2 per node)
- 4x I/O expansion slots (2x CIOv + 2x CFFh)
HX5 configurations: 2S, 16D, 8 I/O ports, 30mm; 4S, 32D, 16 I/O ports, 60mm
Additional features: internal USB for embedded hypervisor; dual & redundant I/O and power; IMM & UEFI
Slide 36
MAX5 for HX5
- MAX5 memory expansion unit attaches to the HX5
- FireHawk chipset: 6 SMI lanes, 4 QPI ports, 3 scalability ports
- 6x memory buffers, 24x VLP DDR3 memory
HX5+MAX5 configurations: 2S, 40D, 8 I/O ports, 60mm
Slide 37
Why IBM BladeCenter?
Open Architecture
Investment Protection
Flexibility & Choice
Open Networking
Total Cost Implications
Slide 38
Flexible and open I/O from an ecosystem of partners
- Standard-speed switches and standard-speed I/O expansion cards
- High-speed switches and high-speed I/O expansion cards
Ethernet – Fibre Channel – InfiniBand – FCoE – SAS – iSCSI – Virtual Fabric
Slide 39
Cisco Nexus 4001I
Switch features:
- 14 ports provide 10Gb to each blade
- 6 uplinks at 10Gb – SFP+ based
- Low latency
- Seamlessly integrates with the Nexus family of products
- Management via CiscoWorks or DCNM
Investment protection:
- 1Gb and 10Gb auto-negotiation
- Works with both 1Gb and 10Gb adapters on the server
- Connects to 1Gb or 10Gb ports using either SFP or SFP+ modules in the uplink ports
Future proof:
- Optional upgrade to I/O convergence through a license key
Slide 40
Cisco Nexus Family
- Complete data-center-class switching portfolio
- Consistent data center operating system across all platforms
- Infrastructure scalability, transport flexibility and operational manageability
[Diagram: Nexus 7000 (modular switch platform), Nexus 5000 (fixed-configuration switch), Nexus 4000 (blade switch), Nexus 2000 (fabric extender), and Nexus 1000V (virtual switch on x86, 2008) – all running the NX-OS data center operating system under Data Center Network Manager]
Slide 41
Converged Networking
[Diagram: Cisco 4001i blade switch with a 10Gb Twinax uplink to a Cisco 5010 switch, which provides the Fibre uplink to storage; Advanced Management Modules shown in the chassis]
Slide 42
BNT Virtual Fabric 10Gb Switch
FIRST FCoE-ready switch for blades!
- 14 downlink ports & 10 ports of uplink bandwidth at less than $500 per port
- Consumes 75W of power – less than a regular light bulb!
Investment protected:
- Can connect to 1Gb or 10Gb datacenter infrastructure (1G or 10G uplinks)
- Can support 1Gb or 10Gb adapters
- Can support converged network traffic (via future firmware upgrade)
Throughput performance no longer an issue:
- Up to 40 ports (10Gb each) per BladeCenter H chassis
- Up to 40Gb per blade server
Simplified management over 10Gb: Serial over LAN & cKVM support
Slide 43
VMready with BNT
Virtual-machine-aware networking: VMready
- Measure and manage VM traffic
- Assign network parameters per virtual machine
- Consistent network settings during VM migrations
- Switch-resident code – nothing to change on the server
- Works with all hypervisors
- Shipping in production since Feb 2009
- BLADE OS 6.1 provides 2nd-generation VMready with VMware integration
Slide 44
New Brocade Converged Solution
Brocade Converged 10GbE Switch Module and Brocade 2-port 10GbE Converged Network Adapter
- The first integrated converged switch with Ethernet and FC ports in the same package
- Multi-protocol support (10GbE, CEE, FCoE, FC and SW iSCSI) for maximum I/O flexibility
- Dynamic Ports on Demand (DPOD) provides scalability so you buy only what you need
- More ports, more bandwidth, and greater flexibility than HP's FlexFabric or Cisco UCS
Slide 45
IBM BladeCenter Open Fabric
An integrated server I/O portfolio providing a comprehensive set of interconnects and smart management tools
Supported interconnects: Ethernet, Fibre Channel, Serial Attached SCSI (SAS), iSCSI, InfiniBand
Management software: BladeCenter Open Fabric Manager, IBM Systems Director, Tivoli® Intelligent Orchestrator, BNT SmartConnect
Slide 46
Supported across virtually ALL chassis, blades and switches
What's the problem?
[Diagram: blades with fixed MAC addresses and WWNs wired to the LAN and a boot LUN; the connections are marked with an X – addressing tied to individual blades breaks when hardware changes]
Slide 47
Blade Deployment Made Easy
What is Open Fabric Manager?
BOFM Basic: an Advanced Management Module-based firmware option
- I/O address assignment for initial blade deployment and re-deployment
- Slot-based I/O address assignment – Enet MAC, FC WWNs, SAS WWNs, FC boot targets and SAS boot targets
- Pre-assignment allows LAN/SAN configuration prior to blade installation
- Automatic re-assignment on blade swap (aka blade rip/replace)
Slide 48
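The slot-based assignment idea can be sketched in a few lines (hypothetical data and function names, not a BOFM API): because addresses belong to the chassis slot rather than the blade, swapping the hardware in a slot leaves its LAN/SAN identity unchanged.

```python
# Hypothetical sketch of slot-based I/O addressing (not a real BOFM API).
# The chassis, not the blade, owns each slot's MAC/WWN identity.
slot_identity = {
    1: {"mac": "00:1A:64:00:00:01", "fc_wwn": "21:00:00:1B:32:00:00:01"},
    2: {"mac": "00:1A:64:00:00:02", "fc_wwn": "21:00:00:1B:32:00:00:02"},
}

def deploy_blade(slot, blade_serial):
    """A new or replacement blade inherits the slot's addresses."""
    identity = slot_identity[slot]
    return {"serial": blade_serial, **identity}

old = deploy_blade(1, "SN-AAA")
new = deploy_blade(1, "SN-BBB")   # rip/replace: same slot, new hardware
assert old["mac"] == new["mac"]   # LAN/SAN configuration keeps working
```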
Open Fabric Manager – BladeCenter
[Diagram: the Management Module assigns WWN1-WWN4 to blade slots; blades 1-14 boot from boot LUNs on the enterprise SAN through the chassis switch]
Slide 49
Fibre Channel World Wide Names are assigned to the blade slot by the AMM
QLogic, Cisco, Brocade switch modules
Open Fabric Manager – Basic
[Diagram: as before, with a spare blade; moved into a slot, the spare inherits that slot's WWN3/WWN4 and boots from the same SAN boot LUN]
Slide 50
Fibre Channel World Wide Names are assigned to the blade slot by the AMM
QLogic, Cisco, Brocade switch modules
Blade Deployment Made Easy
What is Open Fabric Manager?
BOFM Basic: an Advanced Management Module-based firmware option
- I/O address assignment for initial blade deployment and re-deployment
- Slot-based I/O address assignment – Enet MAC, FC WWNs, SAS WWNs, FC boot targets and SAS boot targets
- Pre-assignment allows LAN/SAN configuration prior to blade installation
- Automatic re-assignment on blade swap (aka blade rip/replace)
BOFM Advanced Upgrade: standalone utility or IBM Systems Director extension
- I/O address assignment plus policy-based automation
- Creates standby blade pools – event action plans
- Provides I/O parameter and VLAN migration to standby blades in case of blade failure
Slide 51
Open Fabric Manager – Advanced
[Diagram: on a blade failure, WWN1/WWN2 are reassigned from the failed blade's slot to a standby blade, which boots from the original boot LUN on the enterprise SAN]
Slide 52
Fibre Channel World Wide Names are assigned to the blade slot by the AMM
QLogic, Cisco, Brocade switch modules
Advanced Open Fabric Manager Failover Scenarios
- Example 1: intra-chassis failover – multiple-blade source pool with a single standby pool
- Example 2: multi-blade standby pool – multiple-blade source pool with a multiple-blade standby pool
- Example 3: inter- and intra-chassis failover – cross-chassis source and standby blade pools
Fully automated failover:
- Supports up to 100 chassis
- Manages up to a 1,400-blade fail-over pool
- Director extension or stand-alone configurations
- Network and storage switch independent
Slide 53
Dynamic Infrastructure with System x & BladeCenter – "Fit for Purpose"
Slide 54
IBM Power System Server Update
Cindy Henricks – Power Systems Specialist
Power 7 Systems Portfolio
Power 795, Power 780, Power 770, Power 750, Power 755 (HPC), Power 740, Power 730, Power 720, Power 710, Power 700, 701 & 702
Slide 56
Processor Technology Roadmap
- POWER4 (2001, 180 nm): dual core, chip multiprocessing, distributed switch, shared L2, dynamic LPARs (32)
- POWER5 (2004, 130 nm): dual core, enhanced scaling, SMT, distributed switch+, core parallelism+, FP performance+, memory bandwidth+, virtualization
- POWER6 (2007, 65 nm): dual core, high frequencies, virtualization+, memory subsystem+, Altivec, instruction retry, dynamic energy management, SMT+, protection keys
- POWER7 (2010, 45 nm): multi-core, on-chip eDRAM, power-optimized cores, memory subsystem++, SMT++, reliability+, VSM & VSX (AltiVec), protection keys+
- POWER8: concept phase
Slide 57