vSphere 5 versus Windows Server 2012 Hyper-V: high available VMs
August 14, 2012
This is the third posting in a series in which I compare features of VMware vSphere 5 with Microsoft Windows Server 2012 Hyper-V.
The goal of these postings is to give an unbiased overview of the features of the two main players in the server virtualization market: VMware vSphere and Microsoft Hyper-V. I will not use the marketing comparison tables both vendors publish, which promote their own unique features while ignoring those of the competitor (as marketing tends to do).
Other postings in the series are:
vSphere 5 versus Windows Server 2012 Hyper-V: storage integration
vSphere 5 versus Windows Server 2012 Hyper-V: management
vSphere 5 versus Windows Server 2012 Hyper-V: live migrations
vSphere 5 versus Windows Server 2012 Hyper-V: Resource metering for chargeback
This posting will compare high availability features of both platforms.
Protection against failed hosts
Both Hyper-V (using Windows Failover Clustering) and vSphere (using vSphere HA) protect VMs against host failures. VMs are automatically restarted on another node when a node in the cluster unexpectedly fails.
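As an illustration of how this is switched on, here is a sketch in PowerShell for Hyper-V and PowerCLI for vSphere. The VM and cluster names ("SQL01", "HVCluster01", "Cluster01") are placeholders, not names from this article:

```powershell
# Hyper-V: make an existing VM highly available by adding it as a
# clustered role on a Windows Server 2012 failover cluster
Add-ClusterVirtualMachineRole -VMName "SQL01" -Cluster "HVCluster01"

# vSphere (PowerCLI): enable HA on a cluster; protected VMs are
# restarted on a surviving host after a host failure
Set-Cluster -Cluster "Cluster01" -HAEnabled:$true -Confirm:$false
```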
Protection against network failures
Both vSphere and Windows Server 2012 Hyper-V allow and support teaming of network interfaces to protect against the failure of a network interface. In Windows Server 2008 the drivers of the NIC vendor had to be used, as Microsoft itself did not support teaming; Windows Server 2012 adds native NIC teaming to the operating system.
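Creating a native team in Windows Server 2012 can be sketched with one cmdlet; "NIC1" and "NIC2" are placeholder adapter names (as reported by Get-NetAdapter):

```powershell
# Windows Server 2012 native NIC teaming: team two adapters so the
# connection survives the failure of a single NIC or switch port
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
```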
Protection against storage failures
Both vSphere and Hyper-V can use multiple paths to the storage array; both support multipathing. If all paths from a Hyper-V node to the storage array fail, an alternative route over the LAN can be used to reach the storage. This is called Redirected I/O. vSphere does not offer such a feature.
Monitoring of VM health
vSphere HA is able to monitor the status of the guest operating system running in the VM. If the guest does not respond to a heartbeat signal in time, vSphere HA will reboot the VM. Using third-party solutions, which need to be purchased separately, the VM can also be restarted if the guest operating system is running but critical services have stopped.
Windows Server 2012 Failover Cluster Manager can monitor services inside the Windows guest operating system. This feature is named Virtual Machine Monitoring. To configure this option, see this Microsoft blog. No additional software is needed.
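As a sketch, Virtual Machine Monitoring can also be configured from PowerShell on the Hyper-V cluster. The VM name "Web01" is a placeholder; "Spooler" (the Print Spooler) stands in for whatever service matters in the guest:

```powershell
# Monitor a service inside the guest "Web01"; after repeated service
# failures the cluster takes recovery action on the VM itself
Add-ClusterVMMonitoredItem -VirtualMachine "Web01" -Service "Spooler"

# List which items are being monitored for this VM
Get-ClusterVMMonitoredItem -VirtualMachine "Web01"
```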
Protection against downtime due to host failure
VMs protected by vSphere HA or Failover Clustering will have some downtime when the host unexpectedly fails. If a vSphere VM needs to remain available even when the host it runs on fails, a VMware feature named Fault Tolerance (FT) can be used. When FT is enabled on a VM, a shadow VM is created on another vSphere host. It is a copy of the primary VM, and all CPU, memory and disk actions on the primary VM are copied and replayed on the shadow VM (lockstep). If the host on which the primary VM runs fails, the shadow VM takes over its identity without any downtime! This feature has some restrictions on CPU and disk type usage.
Mind that a fault in the guest operating system or application will occur in the shadow VM as well. Fault Tolerance will not prevent downtime of the application when a service pack requires a reboot of the VM, and a blue screen in Windows will be copied over to the shadow VM. Lockstep means two instances doing exactly the same: if one fails, the other fails as well.
Fault Tolerance can be useful for applications which cannot be clustered or made redundant, but are so critical that server downtime should be avoided.
Hyper-V does not offer such a feature.
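PowerCLI of this era has no dedicated Fault Tolerance cmdlet; enabling FT goes through the vSphere API, roughly as sketched below. The VM name is a placeholder, and this assumes the VM already meets the FT restrictions mentioned above:

```powershell
# Enable Fault Tolerance on a vSphere VM via the API exposed in PowerCLI;
# the call creates the shadow (secondary) VM on another host
$vm = Get-VM -Name "CriticalVM"
# Passing $null lets vSphere choose the host for the shadow VM
$vm.ExtensionData.CreateSecondaryVM($null)
```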
Guest clustering support
Applications running inside a guest VM using Windows Server can be protected against failures using Microsoft Clustering.
vSphere is much more restricted in its support for Microsoft Clustering than Hyper-V; Windows Server 2012 Hyper-V has no restrictions on guest clustering.
See the VMware knowledge base article listing the supported scenarios:
• Fibre Channel: Configuration using shared storage for Quorum and/or Data must be on Fibre Channel (FC) based RDMs (physical mode for cluster across boxes “CAB”, virtual mode for cluster in a box “CIB”). RDMs on storage other than FC (such as NFS or iSCSI) are not currently supported. Virtual disk based shared storage is supported with CIB configuration only and must be created using the EagerZeroedThick option on VMFS datastores.
• Native iSCSI (not in the guest OS): VMware does not currently support the use of ESX host iSCSI, also known as native iSCSI (hardware or software), initiators with MSCS.
• In-guest iSCSI software initiators: VMware fully supports a configuration of MSCS using in-guest iSCSI initiators, provided that all other configuration meets the documented, supported MSCS configuration. Using this configuration in VMware virtual machines is relatively similar to using it in physical environments.
• FCoE: FCoE is not supported at this time.
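For the cluster-in-a-box (CIB) case above, the shared virtual disk must be created eager zeroed thick. With PowerCLI this can be sketched as follows; the VM name, size and datastore are placeholders:

```powershell
# Create a 20 GB shared data disk for a cluster-in-a-box setup;
# EagerZeroedThick zeroes all blocks up front, as MSCS requires
New-HardDisk -VM (Get-VM "Node1") -CapacityGB 20 `
    -StorageFormat EagerZeroedThick -Datastore "VMFS01"
```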