Page 1

Day 1, Session 2

Building the Cloud Fabric

Pete Zerger
Page 2

• Configuring the Storage Layer
• Physical Network
• Configuring Virtual Networking
• Bringing the Hypervisor Under Management

Session 2 Overview

Page 3

Configuring the Storage Layer

Page 4

New Technologies in the Storage Layer

Windows Server 2012 introduces technologies at the storage layer that can replace traditional SAN:
• Storage Spaces
• SMB 3.0
• Scale-Out File Server

When leveraged together, these technologies deliver high performance, easier administration, and lower cost

Page 5

Storage Spaces

Storage Spaces use a pooling model: affordable commodity hardware is put into a pool, and LUNs are created from these pools.
• Supports mirroring and parity for resiliency
• Works with Windows clustering technologies for high availability
• Existing backup and snapshot-based infrastructures can be used

[Diagram: Storage Pool]
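As a minimal PowerShell sketch of the pooling model described above: pool the available commodity disks, then carve a resilient LUN out of the pool. The pool and disk names are illustrative placeholders.

# Discover commodity disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks (assumes the built-in Storage Spaces subsystem)
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Carve a thin-provisioned, mirrored virtual disk (LUN) out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMStore1" -ResiliencySettingName Mirror -ProvisioningType Thin -Size 500GB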

Page 6

• Virtualization of storage with Storage Pools and Storage Spaces

• Storage resilience and availability with commodity hardware

• Resiliency and data redundancy through n-way mirroring (clustered or unclustered) or parity mode (unclustered)

• Utilization optimized through thin and trim provisioning and enclosure awareness

• Integration with other Windows Server 2012 capabilities

• Serial Attached SCSI (SAS) and Serial AT Attachment (SATA) interconnects

Storage Spaces

[Diagram: physical SAS or SATA storage (shared) is virtualized by Windows into Storage Pools and Storage Spaces, consumed by a Windows application server or file server (physical or virtualized deployments), and integrated with other Windows Server 2012 capabilities: NTFS, Failover Clustering, Cluster Shared Volumes, Hyper-V, SMB Direct, SMB Multichannel, NFS, Windows Storage Management, and the File Server Administration Console.]

Page 7

• Highly available, shared data store for SQL Server databases and Hyper-V workloads

• Increased flexibility, and easier provisioning and management

• Ability to take advantage of existing network infrastructure

• No application downtime for planned maintenance or unplanned failures with failover clustering

• Highly available scale-out file server

• Built-in encryption support

Application storage support – SMB 3.0

[Diagram: a file server cluster presents a single logical server (\\Foo\Share) and a single file system namespace over SMB, backed by Cluster Shared Volumes on either a traditional SAN of RAID arrays or Windows virtualized storage (Storage Pools and Storage Spaces); Hyper-V clusters and Microsoft SQL Server consume the shares.]
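To illustrate the share-level capabilities listed above (continuous availability and built-in encryption), a hedged PowerShell sketch; the share name, path, and account are placeholders.

# Create a continuously available, encrypted SMB 3.0 share for application data
# (assumes the folder already exists on a Cluster Shared Volume)
New-SmbShare -Name "SQLData" -Path "C:\ClusterStorage\Volume1\SQLData" -ContinuouslyAvailable $true -EncryptData $true -FullAccess "CONTOSO\sqlservice"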

Page 8

Efficient storage through Data Deduplication

[Chart: average savings with Data Deduplication by workload type, for VHD libraries, software deployment shares, general file shares, and user home folders (My Docs).]

Maximize capacity by removing duplicate data
• 2:1 with file shares, 20:1 with virtual storage
• Less data to back up, archive, and migrate

Increased scale and performance
• Low CPU and memory impact
• Configurable compression schedule
• Transparent to primary server workload

Improved reliability and integrity
• Redundant metadata and critical data
• Checksums and integrity checks
• Increased availability through redundancy

Faster file download times with BranchCache

Source: "Microsoft Internal Testing"
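A brief sketch of enabling deduplication on a data volume with PowerShell, assuming the Data Deduplication role service is available; the drive letter is illustrative.

# Install the Data Deduplication role service (File and Storage Services role)
Add-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the data volume, run an optimization job, and review savings
Enable-DedupVolume -Volume "E:"
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:"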

Page 9

Improved network performance through SMB Direct (RDMA)

[Diagram: without RDMA, data on the file client and file server is copied between application, SMB, OS, and driver buffers by the transport protocol driver and NIC driver; with RDMA-capable NICs (rNICs, such as iWARP or InfiniBand), the SMB client and SMB server move data directly between SMB buffers and the adapter, bypassing the extra buffer copies.]

• Higher performance through offloading of network I/O processing onto network adapter

• High throughput with low latency and ability to take advantage of high-speed networks (such as InfiniBand and iWARP)

• Remote storage at the speed of direct storage

• Transfer rate of around 50 Gbps on a single NIC port

• Compatible with SMB Multichannel for load balancing and failover
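A quick, hedged way to confirm whether RDMA-capable interfaces are present and being used for SMB, using the built-in SMB cmdlets.

# On the file server: list interfaces SMB can use and whether they are RDMA capable
Get-SmbServerNetworkInterface

# On the Hyper-V host (SMB client): check interfaces and active connections
Get-SmbClientNetworkInterface
Get-SmbConnection
Get-SmbMultichannelConnection   # shows which client/server interface pairs are in use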

Page 10

Offloaded Data Transfer (ODX)

[Diagram: the host issues an offload copy request and receives a token from the external intelligent storage array; it then sends a write request with the token, the array moves the actual data between virtual disks internally, and a successful write result is returned.]

Benefits:
• Rapid virtual machine provisioning and migration
• Faster transfers on large files
• Minimized latency
• Maximized array throughput
• Less CPU and network use
• Performance not limited by network throughput or server use
• Improved datacenter capacity and scale

Offloaded Data Transfer (ODX): token-based data transfer between intelligent storage arrays
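ODX is used automatically when the array supports it. As a hedged sketch, the documented registry value below controls whether Windows attempts offloaded transfers (0 enables ODX, 1 disables it); the value may be absent on a default install, which means ODX is enabled.

# Check whether offloaded data transfers are currently allowed on this host
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue
# Value 0 = ODX enabled (default), 1 = ODX disabled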

Page 11

Live migration maintaining Fibre Channel connectivity

Unmediated SAN access with Virtual Fibre Channel

[Diagram: a virtual machine live migrates from Hyper-V host 1 to Hyper-V host 2 while keeping Fibre Channel connectivity by alternating between Worldwide Name Set A and Worldwide Name Set B.]

• Virtualize workloads that require direct access to FC storage

• Live migration support

• N_Port ID Virtualization (NPIV) support

• Single Hyper-V host connected to different SANs

• Up to four Virtual Fibre Channel adapters on a virtual machine

• Multipath I/O (MPIO) functionality

Access Fibre Channel SAN data from a virtual machine
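A hedged sketch of wiring this up with the Hyper-V PowerShell module: define a virtual SAN on the host from a physical HBA port, then add a virtual Fibre Channel adapter to a VM. The SAN and VM names are placeholders.

# Pick a physical Fibre Channel HBA port on the Hyper-V host
$hba = Get-InitiatorPort | Where-Object ConnectionType -eq "Fibre Channel"

# Create a virtual SAN bound to that HBA port
New-VMSan -Name "ProductionSAN" -WorldWideNodeName $hba.NodeAddress -WorldWidePortName $hba.PortAddress

# Give a virtual machine a virtual Fibre Channel adapter on that SAN (up to four per VM)
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "ProductionSAN"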

Page 12

Shared Nothing Live Migration

Demo

Page 13

Storage Automation – Storage Classification

[Diagram: an SMI-S provider exposes two arrays to VMM; pools on Array1 are classified GOLD (Pool1) and SILVER (Pool2), and pools on Array2 are classified BRONZE (Pool1) and GREEN (Pool2).]
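A hedged sketch of creating classifications from the VMM command shell and tagging a discovered pool; the classification and pool names follow the diagram, and Set-SCStoragePool's -StorageClassification parameter is assumed to be available in your VMM build.

# Create storage classifications in VMM
$gold = New-SCStorageClassification -Name "GOLD" -Description "Tier 1 - fastest arrays"
New-SCStorageClassification -Name "SILVER" -Description "Tier 2"

# Tag a discovered storage pool with a classification
$pool = Get-SCStoragePool -Name "Pool1"
Set-SCStoragePool -StoragePool $pool -StorageClassification $gold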

Page 14

Controlling what people should consume

Associate a storage pool and/or logical unit with a host group for consumption by the hosts and clusters contained in that host group

[Screenshot: the Allocate Storage dialog, where available storage pools and storage logical units are moved from unassigned storage to assigned storage for a selected host group.]

Page 15

Storage Classification Options in VMM 2012 SP1

Demo

Page 16


Hyper-V over SMB

What is it?
• Store Hyper-V files in shares over the SMB 3.0 protocol (including VM configuration, VHD files, snapshots)
• Works with both standalone and clustered servers (file storage used as cluster shared storage)

Highlights
• Increases flexibility
• Eases provisioning, management and migration
• Leverages converged network
• Reduces CapEx and OpEx

Supporting Features
• SMB Transparent Failover - Continuous availability
• SMB Scale-Out - Active/Active file server clusters
• SMB Direct (SMB over RDMA) - Low latency, low CPU use
• SMB Multichannel - Network throughput and failover
• SMB Encryption - Security
• VSS for SMB File Shares - Backup and restore
• SMB PowerShell - Manageability

[Diagram: a Hyper-V cluster running Hyper-V, SQL Server, IIS, and VDI desktop workloads stores its files on a file server cluster whose file servers sit in front of shared storage.]
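To make the scenario above concrete, a hedged sketch: publish a share for Hyper-V, grant the host computer accounts access, and create a VM whose files live on the UNC path. Share, path, host, and VM names are placeholders, and matching NTFS permissions are still required on the folder.

# On the file server: share a folder for Hyper-V and grant the host computer accounts full access
New-SmbShare -Name "VMStore" -Path "C:\Shares\VMStore" -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\Domain Admins"

# On the Hyper-V host: place a new VM and its VHDX directly on the SMB 3.0 share
New-VM -Name "Web01" -MemoryStartupBytes 2GB -Path "\\FS01\VMStore" -NewVHDPath "\\FS01\VMStore\Web01\Web01.vhdx" -NewVHDSizeBytes 60GB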

Page 17

SMB Multichannel

Full Throughput
• Bandwidth aggregation with multiple NICs
• Multiple CPU cores engaged when the NIC offers Receive Side Scaling (RSS)

Automatic Failover
• SMB Multichannel implements end-to-end failure detection
• Leverages NIC teaming (LBFO) if present, but does not require it

Automatic Configuration
• SMB detects and uses multiple paths

[Diagram: sample configurations between an SMB client and SMB server - multiple 1GbE NICs, a single 10GbE RSS-capable NIC, multiple 10GbE NICs in an LBFO team, and multiple RDMA NICs (10GbE/InfiniBand); vertical lines are logical channels, not cables.]
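If you need to pin SMB traffic to specific interfaces rather than letting multichannel auto-configure, a hedged sketch using the built-in SMB cmdlets; the server name and interface aliases are placeholders.

# See which client/server interface pairs SMB Multichannel has selected
Get-SmbMultichannelConnection

# Optionally constrain SMB traffic to specific NICs for a given file server
New-SmbMultichannelConstraint -ServerName "FS01" -InterfaceAlias "SMB1", "SMB2"
Get-SmbMultichannelConstraint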

Page 18

Scale-Out File Share Hyper-V over SMB

Demo

Page 19

Physical Network

Page 20

Physical components

• Router
• Switch
• Compute
• Storage
• Edge devices (firewall, security, load balancer)
• Physical NICs
• Rack

Page 21

[Diagram: racks of compute and storage (Rack 1, Rack 2) connect to top-of-rack switches, which uplink through aggregate switches to a core router and edge devices.]

Page 22

Host configuration - three options

[Diagram: host networking layouts, each carrying VM traffic (VM1...VMN) alongside storage, live migration, cluster, and management traffic:
• Non-converged - separate 1GbE NICs plus 10GbE or HBA/10GbE for storage
• Converged Option 1 (and 1+) - 10GbE NICs each, with the traffic classes converged
• Converged Option 2 - 10GbE NICs each, with CSV/RDMA traffic]
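For the converged options, a hedged sketch of how a single virtual switch can carry management, live migration, and cluster traffic with weight-based QoS on host vNICs; the switch name, adapter name, and weights are illustrative.

# Create a converged virtual switch on a 10GbE team/adapter with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "10GbE-Team" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add host (management OS) vNICs for each traffic class
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# Weight the traffic classes
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10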

Page 23

Configuring Virtual Networking

Page 24

Merging Physical and Logical in VMM

Logical Network
• Models the physical network
• Separates like subnets and VLANs into named objects that can be scoped to a site
• Container for fabric static IP address pools
• VM networks are created on logical networks

Logical Switch
• Central container for virtual switch settings
• Consistent port profiles across the data center
• Add port classifications
• Consistent extensions
• Compliance enforcement

Page 25

Configuring Logical Networks

1 - Define Logical Networks
2 - Define VM Networks
3 - Create Logical Switches
4 - Assign Logical Switch
5 - Create and Assign Gateways

[Diagram: tenant VM networks (Tenant 1, Tenant 2) connect through a logical switch (virtual switch) to the logical network that models the physical network, with a gateway providing access to the Internet.]
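A hedged sketch of steps 1 and 2 from the VMM command shell: define a logical network with a network site, then create a VM network on top of it. The names, host group, subnet, VLAN, and isolation type are placeholders.

# Step 1: define a logical network and a network site (logical network definition)
$ln = New-SCLogicalNetwork -Name "Datacenter"
$hostGroup = Get-SCVMHostGroup -Name "Production"
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.10.0/24" -VLanID 10
New-SCLogicalNetworkDefinition -Name "Datacenter - Site1" -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

# Step 2: create a VM network bound to that logical network
New-SCVMNetwork -Name "Tenant1" -LogicalNetwork $ln -IsolationType "NoIsolation"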

Page 26

Address Pools

IP POOLS
• Assigned to VMs, vNICs, hosts, and virtual IPs (VIPs)
• Specified use in VM template creation
• Checked out at VM creation - assigns static IP in VM
• Returned on VM deletion

MAC POOLS
• Assigned to VMs
• Specified use in VM template creation
• Checked out at VM creation - assigned before VM boot
• Returned on VM deletion

VIRTUAL IP POOLS
• Assigned to service tiers that use a load balancer
• Reserved within IP pools
• Assigned to clouds
• Checked out at service deployment
• Returned on service deletion
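A hedged sketch of creating a static IP address pool on the network site defined earlier; the pool name, subnet, and range values are placeholders.

# Create a static IP pool on the logical network definition (network site)
$lnd = Get-SCLogicalNetworkDefinition -Name "Datacenter - Site1"
New-SCStaticIPAddressPool -Name "Site1 Pool" -LogicalNetworkDefinition $lnd -Subnet "10.0.10.0/24" -IPAddressRangeStart "10.0.10.50" -IPAddressRangeEnd "10.0.10.200"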

Page 27

Configuring Virtual Networking in VMM

Demo

Page 28

DEMO: Configuring Network Fabric in VMM

Define Logical Networks
• Datacenter Networks (Isolated VLANs)
• Provider Networks (Virtualized Networks)

Define VM Networks
• One per VLAN or Virtualized Network

Create Logical Switch
• Port Classifications & Port Profiles
• Switch Extensions

Assign Logical Switch
• Host – Add Logical Switch

Create and Assign Gateways (Virtualized Networks)
The Gateway is how Internet access is provided to isolated tenant VM networks

Page 29

A Note on Tenant Configuration
• Using network virtualization for isolation
• NVGRE gateway gives tenants access to the outside world

Without Gateway
• Use a VM with two NICs
• One on the isolated network, one on the “Internet”

With Gateway
• Private cloud: route to local networks
• Hybrid cloud: create a site-to-site tunnel

Page 30

Bringing the Hypervisor Under Management

Page 31

Bringing Hyper-V Hosts Under Management

VMM provides a lot of flexibility in managing Hyper-V hosts and clusters:
• Supports domain and workgroup hosts
• Windows Server 2008 and 2012 hosts
• Add hosts through the UI or PowerShell
• Enables drag-and-drop clustering in the VMM console
• Provides RBAC for provisioning access to map to our “classes of service”
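A hedged sketch of adding a domain-joined Hyper-V host from the VMM command shell; the host name, host group, and Run As account are placeholders.

# Add a domain-joined Hyper-V host to a VMM host group
$runAs = Get-SCRunAsAccount -Name "HostAdmin"
$hostGroup = Get-SCVMHostGroup -Name "Production"
Add-SCVMHost "HV01.contoso.com" -VMHostGroup $hostGroup -Credential $runAs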

Page 32

Bringing VMware Hosts Under Management

A few important points to understand:
• Connecting VMM to vCenter does not result in a fundamental change to the datacenter tree
• Re-arranging and securing vSphere hosts and host clusters in VMM does NOT affect security within vCenter
• Even if you don’t deploy to vSphere in phase 1, this connectivity brings visibility from an asset management perspective
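As a hedged sketch only (cmdlet parameters may vary by VMM version), vCenter is added as a virtualization manager from the VMM command shell before its vSphere hosts become visible; the server name and Run As account are placeholders.

# Connect VMM to an existing vCenter server (asset visibility; does not change vCenter security)
$runAs = Get-SCRunAsAccount -Name "vCenterAdmin"
Add-SCVirtualizationManager -ComputerName "vcenter.contoso.com" -Credential $runAs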

Page 33

Managing Hyper-V and VMware Hosts in VMM

Demo

Page 34

In this module, you learned about:
• Configuring the Storage Layer
• Physical Network
• Configuring Virtual Networking
• Bringing the Hypervisor Under Management

Module Summary

Page 35

©2013 Microsoft Corporation. All rights reserved. Microsoft, Windows, Office, Azure, System Center, Dynamics and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.