Implementing Affordable Disaster Recovery with Hyper-V and Multi-Site Clustering
Greg Shields, MVP
Partner and Principal Technologist
www.ConcentratedTech.com
Transcript
Page 1: Implementing dr w. hyper v clustering

Implementing Affordable Disaster Recovery with Hyper-V and

Multi-Site Clustering

Greg Shields, MVP
Partner and Principal Technologist

www.ConcentratedTech.com

Page 2: Implementing dr w. hyper v clustering

This slide deck was used in one of our many conference presentations. We hope you enjoy it, and invite you to use it within your own organization however you like.

For more information on our company, including information on private classes and upcoming conference appearances, please visit our Web site, www.ConcentratedTech.com.

For links to newly-posted decks, follow us on Twitter: @concentrateddon or @concentratedgreg

This work is copyright © Concentrated Technology, LLC.

Page 3: Implementing dr w. hyper v clustering

What Makes a Disaster?
Which of the following would you consider a disaster?

● A naturally-occurring event, such as a tornado, flood, or hurricane, impacts your datacenter and causes damage. That damage causes the entire processing of that datacenter to cease.

● A widespread incident, such as a water leak or long-term power outage, interrupts the functionality of your datacenter for an extended period of time.

● A problem with a virtual host creates a “blue screen of death”, immediately ceasing all processing on that server.

● An administrator installs a piece of code that causes problems with a service, shutting down that service and preventing some action from occurring on the server.

● An issue with power connections causes a server or an entire rack of servers to inadvertently and rapidly power down.

Page 4: Implementing dr w. hyper v clustering

What Makes a Disaster?
Which of the following would you consider a disaster?

● A naturally-occurring event, such as a tornado, flood, or hurricane, impacts your datacenter and causes damage. That damage causes the entire processing of that datacenter to cease.

● A widespread incident, such as a water leak or long-term power outage, interrupts the functionality of your datacenter for an extended period of time.

● A problem with a virtual host creates a “blue screen of death”, immediately ceasing all processing on that server.

● An administrator installs a piece of code that causes problems with a service, shutting down that service and preventing some action from occurring on the server.

● An issue with power connections causes a server or an entire rack of servers to inadvertently and rapidly power down.

[Slide overlay: the scenarios above are labeled either “DISASTER!” or “JUST A BAD DAY!”]

Page 5: Implementing dr w. hyper v clustering

What Makes a Disaster?
• Your decision to “declare a disaster” and move to “disaster ops” is a major one.
• The technologies used for disaster protection are different from those used for high availability.
● More complex.
● More expensive.

Page 6: Implementing dr w. hyper v clustering

What Makes a Disaster?
• Your decision to “declare a disaster” and move to “disaster ops” is a major one.
• The technologies used for disaster protection are different from those used for high availability.
● More complex.
● More expensive.

• Failover and failback processes involve more thought.
● You might not be able to just “fail back” with a click of a button.

Page 7: Implementing dr w. hyper v clustering

A Disastrous Poll

• Where are We? Who Here is…
● Planning a DR Environment?
● In Process of Implementing One?
● Already Enjoying One?
● What’s a “DR Environment”???

Page 8: Implementing dr w. hyper v clustering

Multi-Site Hyper-V == Single-Site Hyper-V

• DON’T PANIC: Multi-site Hyper-V looks very much the same as single-site Hyper-V.

● Microsoft has not done a good job of explaining this fact!
● Some Hyper-V hosts.
● Some networking and storage.
● Virtual machines that Live Migrate around.

Page 9: Implementing dr w. hyper v clustering

Multi-Site Hyper-V == Single-Site Hyper-V

• DON’T PANIC: Multi-site Hyper-V looks very much the same as single-site Hyper-V.

● Microsoft has not done a good job of explaining this fact!
● Some Hyper-V hosts.
● Some networking and storage.
● Virtual machines that Live Migrate around.

• But there are some major differences too…
● VMs can Live Migrate across sites.
● Sites typically have different subnet arrangements.
● Data in the primary site must be replicated to the DR site.
● Clients need to know where your servers go!

Page 10: Implementing dr w. hyper v clustering

Constructing Site-Proof Hyper-V: Three Things You Need

• At a very high level, Hyper-V disaster recovery is three things:
● A storage mechanism
● A replication mechanism
● A set of target servers and a cluster to receive virtual machines and their data
• Once you have these three things, layering Hyper-V atop is easy.

Page 11: Implementing dr w. hyper v clustering

Constructing Site-Proof Hyper-V: Three Things You Need

[Diagram: primary Hyper-V servers and a storage device in the primary site; backup Hyper-V servers and storage device(s) in the backup site; a replication mechanism between the storage devices; target servers ready to receive the virtual machines.]

Page 12: Implementing dr w. hyper v clustering

Thing 1: A Storage Mechanism
• Typically, two SANs in two different locations.
● Fibre Channel, iSCSI, FCoE, heck, JBOD.
● Often a similar model or manufacturer.
● This similarity can be necessary (although not required) for some replication mechanisms to function properly.

Page 13: Implementing dr w. hyper v clustering

Thing 1: A Storage Mechanism
• Typically, two SANs in two different locations.
● Fibre Channel, iSCSI, FCoE, heck, JBOD.
● Often a similar model or manufacturer.
● This similarity can be necessary (although not required) for some replication mechanisms to function properly.

• The backup SAN doesn’t necessarily need to be of the same size or speed as the primary SAN.
● Replicated data isn’t always the full set of data.
● You may not need disaster recovery for everything.
● DR Environments: Where Old SANs Go To Die.

Page 14: Implementing dr w. hyper v clustering

Thing 2: A Replication Mechanism

• Replication between SANs must occur.
● There are two commonly accepted ways to accomplish this…

Page 15: Implementing dr w. hyper v clustering

Thing 2: A Replication Mechanism

• Replication between SANs must occur.
● There are two commonly accepted ways to accomplish this…

• Synchronously
● Changes are made on one node at a time.
● Subsequent changes on the primary SAN must wait for an ACK from the backup SAN.

• Asynchronously
● Changes on the backup SAN will eventually be written.
● Changes are queued at the primary SAN to be transferred at intervals.

Page 16: Implementing dr w. hyper v clustering

Thing 2: A Replication Mechanism

• Synchronously
● Changes are made on one node at a time. Subsequent changes on the primary SAN must wait for an ACK from the backup SAN.

[Diagram: primary-site and backup-site storage devices. 1. Change committed at primary site. 2. Change replicated to secondary site. 3. Change committed at secondary site. 4. Acknowledgement of change returned to primary site. 5. Change complete.]

Page 17: Implementing dr w. hyper v clustering

Thing 2: A Replication Mechanism

• Asynchronously
● Changes on the backup SAN will eventually be written. Changes are queued at the primary SAN to be transferred at intervals.

[Diagram: primary-site and backup-site storage devices. Changes 1, 2, and 3 are committed at the primary site, then replicated to the secondary site in a batch; change 4 is committed at the primary site and waits for the next transfer.]

Page 18: Implementing dr w. hyper v clustering

Class Discussion
Which would you choose? Why?

Page 19: Implementing dr w. hyper v clustering

Class Discussion
Which would you choose? Why?

Synchronous
● Assures no loss of data.
● Requires a high-bandwidth and low-latency connection.
● Write and acknowledgement latencies impact performance.
● Requires shorter distances between storage devices.

Asynchronous
● Potential for loss of data during a failure.
● Leverages smaller-bandwidth connections; more tolerant of latency.
● No performance impact.
● Potential to stretch across longer distances.

Your Recovery Point Objective makes this decision (see the sketch below)…
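To make the trade-off concrete, here is a minimal Python sketch of the two modes. It is purely illustrative; the class and method names are invented for this example and are not part of any SAN's interface. Synchronous replication acknowledges a write only once both sites hold it, while asynchronous replication acknowledges immediately and ships queued changes at intervals, so the data at risk is bounded by whatever is still sitting in the queue.

```python
class SyncReplica:
    """Every write waits for the backup site's acknowledgement (no data loss)."""

    def __init__(self):
        self.primary = []
        self.backup = []

    def write(self, change):
        self.primary.append(change)   # commit at the primary site
        self.backup.append(change)    # replicate and commit at the backup site
        return "ack"                  # returned only after both sites have the change

    def changes_at_risk(self):
        return len(self.primary) - len(self.backup)   # always 0


class AsyncReplica:
    """Writes are acknowledged immediately; changes ship in batches at an interval."""

    def __init__(self):
        self.primary = []
        self.backup = []
        self.queue = []

    def write(self, change):
        self.primary.append(change)   # commit at the primary site
        self.queue.append(change)     # queue for the next replication interval
        return "ack"                  # no WAN round trip in the write path

    def replication_interval(self):
        self.backup.extend(self.queue)   # batch transfer to the backup SAN
        self.queue.clear()

    def changes_at_risk(self):
        return len(self.queue)        # committed locally but not yet replicated


if __name__ == "__main__":
    a = AsyncReplica()
    for i in range(1, 5):
        a.write(f"change {i}")
    a.replication_interval()          # changes 1-4 now exist at the backup site
    a.write("change 5")               # still queued when the primary site fails
    print("Changes lost if the primary site fails now:", a.changes_at_risk())  # -> 1
```

That queue is exactly what the Recovery Point Objective measures: with a five-minute replication interval, up to five minutes of committed writes can be lost in a primary-site failure.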

Page 20: Implementing dr w. hyper v clustering

Thing 2½: Replication Processing Location

• There are also two locations for replication processing…

Page 21: Implementing dr w. hyper v clustering

Thing 2½: Replication Processing Location

• There are also two locations for replication processing…

• Storage Layer
● Replication processing is handled by the SAN itself.
● Agents are often installed on virtual hosts or machines to ensure crash consistency.
● Easier to set up, fewer moving parts. More scalable.
● Concerns about crash consistency.

• OS / Application Layer
● Replication processing is handled by software in the VM OS.
● This software also operates as the agent.
● More challenging to set up, more moving parts. More installations to manage/monitor. Scalability and cost are linear.
● Fewer concerns about crash consistency.

Page 22: Implementing dr w. hyper v clustering

Thing 3: Target Servers and a Cluster

• Finally, you need target servers and a cluster in the backup site.

[Diagram: two Hyper-V servers in the backup site, each attached to storage and connected through redundant network switches.]

Page 23: Implementing dr w. hyper v clustering

Clustering’s Sordid History

• Windows NT 4.0
● Microsoft Cluster Service “Wolfpack”.
● “As the corporate expert in Windows clustering, I recommend you don’t use Windows clustering.”

Page 24: Implementing dr w. hyper v clustering

Clustering’s Sordid History

• Windows NT 4.0
● Microsoft Cluster Service “Wolfpack”.
● “As the corporate expert in Windows clustering, I recommend you don’t use Windows clustering.”

• Windows 2000
● Greater availability, scalability. Still painful.

• Windows 2003
● Added iSCSI storage to traditional Fibre Channel.
● SCSI Resets still used as method of last resort (painful).

Page 25: Implementing dr w. hyper v clustering

Clustering’s Sordid History

• Windows NT 4.0
● Microsoft Cluster Service “Wolfpack”.
● “As the corporate expert in Windows clustering, I recommend you don’t use Windows clustering.”

• Windows 2000
● Greater availability, scalability. Still painful.

• Windows 2003
● Added iSCSI storage to traditional Fibre Channel.
● SCSI Resets still used as method of last resort (painful).

• Windows 2008
● Eliminated use of SCSI Resets.
● Eliminated full-solution HCL requirement.
● Added Cluster Validation Wizard and pre-cluster tests.
● Clusters can now span subnets (ta-da!)

Page 26: Implementing dr w. hyper v clustering

Clustering’s Sordid History

• Windows NT 4.0
● Microsoft Cluster Service “Wolfpack”.
● “As the corporate expert in Windows clustering, I recommend you don’t use Windows clustering.”

• Windows 2000
● Greater availability, scalability. Still painful.

• Windows 2003
● Added iSCSI storage to traditional Fibre Channel.
● SCSI Resets still used as method of last resort (painful).

• Windows 2008
● Eliminated use of SCSI Resets.
● Eliminated full-solution HCL requirement.
● Added Cluster Validation Wizard and pre-cluster tests.
● Clusters can now span subnets (ta-da!)

• Windows 2008 R2
● Improvements to Cluster Validation Wizard and Migration Wizard.
● Additional cluster services.
● Cluster Shared Volumes (!) and Live Migration (!)

Page 27: Implementing dr w. hyper v clustering

So, What IS a Cluster?

Page 28: Implementing dr w. hyper v clustering

So, What IS a Cluster?

Quorum Drive & Storage for Hyper-V VMs

Page 29: Implementing dr w. hyper v clustering

So, What IS a Multi-Site Cluster?

[Diagram: a Hyper-V server with iSCSI storage and redundant network switches in each site, one site labeled the backup site, plus a witness server in a separate witness site.]

Page 30: Implementing dr w. hyper v clustering

Quorum: Windows Clustering’s Most Confusing Configuration

• Ever been to a Kiwanis meeting…?

Page 31: Implementing dr w. hyper v clustering

Quorum: Windows Clustering’s Most Confusing Configuration

• Ever been to a Kiwanis meeting…?
• A cluster “exists” because it has quorum between its members. That quorum is achieved through a voting process.
● Different Kiwanis clubs have different rules for quorum.
● Different clusters have different rules for quorum.

Page 32: Implementing dr w. hyper v clustering

Quorum: Windows Clustering’s Most Confusing Configuration

• Ever been to a Kiwanis meeting…?
• A cluster “exists” because it has quorum between its members. That quorum is achieved through a voting process.
● Different Kiwanis clubs have different rules for quorum.
● Different clusters have different rules for quorum.

• If a cluster “loses quorum”, the entire cluster shuts down and ceases to exist, and it stays down until quorum is regained.
● This is very different from a resource failover, which is the reason clusters are implemented in the first place.

• Multiple quorum models exist (the sketch below shows the basic vote arithmetic).
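The voting rule itself is one line of arithmetic. The sketch below is a deliberate simplification (it treats a witness disk or share as just one more vote and ignores the dynamic quorum behavior of later Windows releases), but it shows why a cluster that can no longer reach a majority of its votes shuts down entirely.

```python
def has_quorum(votes_online: int, total_votes: int) -> bool:
    """The cluster keeps running only while a strict majority of votes is reachable."""
    return votes_online > total_votes // 2

# Example: 4 nodes plus a witness = 5 votes; a majority is 3.
print(has_quorum(3, 5))   # True  -> the cluster stays up
print(has_quorum(2, 5))   # False -> the whole cluster goes offline until quorum returns
```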

Page 33: Implementing dr w. hyper v clustering

Four Options for Quorum

• Node and Disk Majority
• Node Majority
• Node and File Share Majority
• No Majority: Disk Only


Page 37: Implementing dr w. hyper v clustering

Quorum in Multi-Site Clusters

• Node and Disk Majority
• Node Majority
• Node and File Share Majority
• No Majority: Disk Only

• Microsoft recommends using the Node and File Share Majority model for multi-site clusters.
● This model provides the best protection against a full-site outage.
● Protecting against a full-site outage requires a file share witness in a third geographic location.

Page 38: Implementing dr w. hyper v clustering

Quorum in Multi-Site Clusters
• Use the Node and File Share Majority quorum.
● Prevents an entire-site outage from impacting quorum.
● Enables creation of multiple clusters if necessary.

[Diagram: Hyper-V servers with iSCSI storage and redundant network switches in the primary and backup sites, plus a witness server in a third site hosting the file share witness.]

Page 39: Implementing dr w. hyper v clustering

I Need a Third Site? Seriously?
• Here’s where Microsoft’s ridiculous quorum notion gets unnecessarily complicated…

• What happens if you put the quorum’s file share in the primary site?
● The secondary site might not automatically come online after a primary-site failure.
● Votes in secondary site < votes in primary site.
● Let’s count on our fingers…

Page 40: Implementing dr w. hyper v clustering

I Need a Third Site? Seriously?
• Here’s where Microsoft’s ridiculous quorum notion gets unnecessarily complicated…

• What happens if you put the quorum’s file share in the secondary site?
● A failure in the secondary site could cause the primary site to go down.
● Votes in secondary site > votes in primary site.
● More fingers…

This problem gets even weirder as time passes and the number of servers changes in each site (the sketch below counts the votes for each witness placement).
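Counting on fingers translates directly into code. A small sketch, using made-up node counts (two nodes per site plus one file share witness, assumptions for illustration only), applies the same majority rule and shows why each witness placement behaves the way the two slides above describe.

```python
def survives_site_loss(nodes_per_site: dict, witness_site: str, failed_site: str) -> bool:
    """Can the surviving nodes still form a majority after one site is lost?

    nodes_per_site: votes contributed by cluster nodes in each site.
    witness_site:   the site hosting the file share witness (one extra vote).
    failed_site:    the site that just went dark.
    """
    total = sum(nodes_per_site.values()) + 1                       # +1 for the witness
    surviving = sum(v for site, v in nodes_per_site.items() if site != failed_site)
    if witness_site != failed_site:
        surviving += 1
    return surviving > total // 2                                  # strict majority

sites = {"primary": 2, "secondary": 2}

# Witness in the primary site: the secondary cannot come online on its own.
print(survives_site_loss(sites, witness_site="primary", failed_site="primary"))     # False
# Witness in the secondary site: losing the secondary takes the primary down too.
print(survives_site_loss(sites, witness_site="secondary", failed_site="secondary")) # False
# Witness in a third site: either datacenter can fail and the other keeps quorum.
print(survives_site_loss(sites, witness_site="witness", failed_site="primary"))     # True
print(survives_site_loss(sites, witness_site="witness", failed_site="secondary"))   # True
```

With the witness in a third site, either datacenter can disappear and the survivors still hold three of five votes; put the witness in either datacenter and the arithmetic fails exactly as the slides warn.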

Page 41: Implementing dr w. hyper v clustering

I Need a Third Site? Seriously?

[Diagram: the same multi-site cluster, with Hyper-V servers and iSCSI storage in the primary and backup sites and the witness server placed in a third site.]

Page 42: Implementing dr w. hyper v clustering

DEMO: Multi-Site Clustering

Page 43: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks
• Install servers in sites so that your primary site always contains more servers than your backup sites.
● Eliminates some problems with quorum during a site outage.

Page 44: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

• Manage Preferred Owners & Persistent Mode options.
● Make sure your servers fail over to servers in the same site first.
● But also make sure they have options for failing over elsewhere.

Page 45: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

Page 46: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

• Manage Preferred Owners & Persistent Mode options.
● Make sure your servers fail over to servers in the same site first.
● But also make sure they have options for failing over elsewhere (a conceptual sketch follows after this list).

• Consider carefully the effects of Failback.
● Failback is a great solution for resetting after a failure.
● But Failback can be a massive problem-causer as well.
● Its effects are particularly pronounced in multi-site clusters.
● Recommendation: turn it off (until you’re ready).
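This is not the cluster service's actual placement algorithm, just a conceptual sketch (the node names, site map, and function are invented for illustration) of the ordering the Preferred Owners tip asks for: same-site nodes first, with cross-site nodes kept on the list so a whole-site loss still leaves somewhere to go.

```python
def failover_order(current_node: str, site_of: dict, candidates: list) -> list:
    """Order failover candidates: nodes in the current node's site first,
    then nodes in other sites, so a whole-site loss still has targets.

    site_of:    mapping of node name -> site name (assumed inventory data).
    candidates: the group's preferred-owner list, excluding current_node.
    """
    home_site = site_of[current_node]
    same_site = [n for n in candidates if site_of[n] == home_site]
    other_sites = [n for n in candidates if site_of[n] != home_site]
    return same_site + other_sites

site_of = {"HV1": "primary", "HV2": "primary", "HV3": "backup", "HV4": "backup"}
print(failover_order("HV1", site_of, ["HV2", "HV3", "HV4"]))
# -> ['HV2', 'HV3', 'HV4']: stay local first, but keep cross-site options available
```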

Page 47: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

Page 48: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

• Resist creating clusters that support other services.
● A Hyper-V cluster is a Hyper-V cluster is a Hyper-V cluster.

Page 49: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

• Resist creating clusters that support other services.
● A Hyper-V cluster is a Hyper-V cluster is a Hyper-V cluster.

• Use disk “dependencies” as Affinity/Anti-Affinity rules.
● Hyper-V all by itself doesn’t have an elegant way to affinitize.
● Setting disk dependencies against each other is a work-around.

Page 50: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

• Resist creating clusters that support other services.
● A Hyper-V cluster is a Hyper-V cluster is a Hyper-V cluster.

• Use disk “dependencies” as Affinity/Anti-Affinity rules.
● Hyper-V all by itself doesn’t have an elegant way to affinitize.
● Setting disk dependencies against each other is a work-around.

• Add Servers in Pairs.
● Ensures that a server loss won’t cause a site split-brain.
● This is less of a problem with the File Share Witness configuration.

Page 51: Implementing dr w. hyper v clustering

Multi-Site Cluster Tips/Tricks

• Segregate traffic!!!

Page 52: Implementing dr w. hyper v clustering

Most Important!

• Ensure that networking remains available when VMs migrate from primary to backup site.

Page 53: Implementing dr w. hyper v clustering

Most Important!

• Ensure that networking remains available when VMs migrate from the primary to the backup site.

• Clustering can span subnets! This is good, but only if you plan for it…
● Remember that crossing subnets also means changing the IP address, subnet mask, gateway, etc., at the new site.
● This can be done automatically by using DHCP and dynamic DNS, or it must be updated manually.
● DNS replication is also a problem. Clients will require time to update their local cache.
● Consider reducing the DNS TTL or clearing the client cache (the quick arithmetic below shows why the TTL matters).
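A back-of-the-envelope sketch of why the TTL matters; the numbers are placeholders, not recommendations. Until its cached record expires and the new record has replicated through DNS, a client can keep trying the VM's old, now-dead address.

```python
def worst_case_client_delay(dns_ttl_s: int, dns_replication_s: int) -> int:
    """Rough upper bound on how long clients may chase a VM's old IP address
    after it moves to the other site's subnet: replication lag plus cached TTL."""
    return dns_replication_s + dns_ttl_s

# Placeholder values: 15-minute zone replication, 20-minute TTL -> ~35 minutes of confusion.
print(worst_case_client_delay(dns_ttl_s=1200, dns_replication_s=900) / 60, "minutes")

# Lower the TTL ahead of a planned move (or flush client caches) and the window shrinks.
print(worst_case_client_delay(dns_ttl_s=60, dns_replication_s=900) / 60, "minutes")
```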

Page 54: Implementing dr w. hyper v clustering

Implementing Affordable Disaster Recovery with Hyper-V and

Multi-Site Clustering

Greg Shields, MVP
Partner and Principal Technologist

www.ConcentratedTech.com

Page 55: Implementing dr w. hyper v clustering

This slide deck was used in one of our many conference presentations. We hope you enjoy it, and invite you to use it within your own organization however you like.

For more information on our company, including information on private classes and upcoming conference appearances, please visit our Web site, www.ConcentratedTech.com.

For links to newly-posted decks, follow us on Twitter: @concentrateddon or @concentratedgreg

This work is copyright © Concentrated Technology, LLC.