Why moving from Windows Server 2012 R2 to 2016 for Hyper-V
Last month, Microsoft announced that Windows Server 2016 will be released next month. Windows Server 2016 brings a lot of new features for Hyper-V, networking and storage compared to the previous Windows Server version. In this topic I will try to convince you, with eight reasons, to move from a prior Windows Server edition to Windows Server 2016.
Hyper-V new features
Hyper-V brings a lot of new features in Windows Server 2016. I won't cover all of them, only the most important ones in my opinion (please read this topic for a complete list of new features in Hyper-V).
The first important feature is the production checkpoint, which enables you to take a checkpoint of any workload. A standard checkpoint relies on saved state, and the problem is that some guest workloads, such as SQL Server or Active Directory, don't support saved state. A production checkpoint leverages VSS inside the guest instead, so all workloads are supported.
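As a quick sketch (the VM and checkpoint names are hypothetical), switching a VM to production checkpoints and taking one looks like this in PowerShell:

```powershell
# Use VSS-based production checkpoints; Hyper-V falls back to a
# standard checkpoint only if a production checkpoint is not possible
Set-VM -Name "VM-SQL01" -CheckpointType Production

# Take the checkpoint
Checkpoint-VM -Name "VM-SQL01" -SnapshotName "Before-CU-Install"
```

You can also set `-CheckpointType ProductionOnly` if you want the checkpoint to fail rather than fall back to a standard checkpoint.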
The second great feature is the VHD Set (VHDS). This new format, supported only on Windows Server 2016, enables you to share a virtual hard disk between several virtual machines. The main advantages of a VHD Set compared to a shared VHDX are that it supports backup, replica, resizing and migration. This is the new way to implement a guest cluster with shared disks in Windows Server 2016.
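A guest cluster shared disk built on a VHD Set can be sketched like this (the path and VM names are examples):

```powershell
# The .vhds extension selects the new VHD Set format
New-VHD -Path "C:\ClusterStorage\Volume1\Shared01.vhds" -SizeBytes 100GB -Dynamic

# Attach the same VHD Set to both guest cluster nodes, with
# persistent reservations enabled for shared disk semantics
Add-VMHardDiskDrive -VMName "VM-Node01" -Path "C:\ClusterStorage\Volume1\Shared01.vhds" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "VM-Node02" -Path "C:\ClusterStorage\Volume1\Shared01.vhds" -SupportPersistentReservations
```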
Thirdly, Hyper-V can now give a virtual machine direct access to PCIe hardware such as a GPU. This feature, called Discrete Device Assignment, bypasses the virtualization stack to increase performance.
Finally, Microsoft has increased the reliability of the virtual machine configuration files, which now use a binary format that is no longer directly readable or editable. The new configuration file extension is .vmcx and the runtime state file extension is .vmrs.
Switch Embedded Teaming
In Windows Server 2012 R2, when you want to implement networking high availability, you usually deploy a NIC team and then bind a virtual switch to the teaming virtual NIC. The main issue with NIC Teaming is the lack of support for useful features in the root partition, such as vRSS or RDMA. This makes it difficult to build a fully converged network infrastructure, especially when using SMB3: RDMA is unavailable, and the lack of vRSS also reduces performance below expectations.
Switch Embedded Teaming enables you to create a virtual switch where the teaming is managed by the virtual switch itself. When you create a vNIC in the root partition bound to a Switch Embedded Teaming switch, it supports RDMA, DCB, vRSS and VMQ. Thanks to this feature, you can converge all the traffic, even the SMB3 traffic, onto two big network adapters (at least 10 Gb/s). If you'd like to know more about Switch Embedded Teaming, you can read this topic. Switch Embedded Teaming is supported only on the Hyper-V host, not in virtual machines, where you can still use NIC Teaming.
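A converged setup with Switch Embedded Teaming could be sketched as follows (the switch and adapter names are examples; RDMA requires capable physical NICs):

```powershell
# Create a virtual switch that teams the two physical NICs itself
New-VMSwitch -Name "SW-Converged" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# Add host vNICs for management and SMB traffic on the same switch
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "SW-Converged"
Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName "SW-Converged"

# Enable RDMA on the SMB vNIC
Enable-NetAdapterRdma -Name "vEthernet (SMB01)"
```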
ReFS accelerated VHDX Operation
Microsoft brings a lot of enhancements to ReFS (Resilient File System). The accelerated VHDX operations are something that may interest you for Hyper-V: this feature accelerates the creation and resizing of fixed VHDX files and the merging of checkpoints.
Storage Spaces Direct
Microsoft has introduced Storage Spaces Direct with Windows Server 2016. This feature enables you to use locally-attached storage devices (such as SAS JBODs or internal NVMe, SAS or SATA disks) in a cluster. Thanks to this feature you don't need a shared JBOD or an expensive SAN system for your virtual machines. This feature brings flexibility and scalability, especially when implementing the hyperconverged model. A disaggregated model is also available, which uses dedicated file servers running the Scale-Out File Server cluster role to present distributed shares.
Storage Spaces Direct is a huge evolution in the datacenter vision. Before hyperconverged solutions such as Storage Spaces Direct, VMware vSAN, Nutanix or SimpliVity, a SAN or a NAS was implemented to store virtual machines. These systems are expensive and not very scalable. Thanks to solutions such as Storage Spaces Direct, you no longer need a SAN for this usage and you can manage your datacenter in a Software-Defined model.
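Building a hyperconverged Storage Spaces Direct cluster boils down to a few commands (the node, cluster and volume names are examples):

```powershell
# Create the cluster without assigning any shared storage
New-Cluster -Name "Cluster01" -Node "Node01","Node02","Node03","Node04" -NoStorage

# Claim the eligible local disks of every node into one storage pool
Enable-ClusterStorageSpacesDirect

# Carve a resilient CSV volume out of the pool for the virtual machines
New-Volume -FriendlyName "VMVol01" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 2TB
```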
Storage Replica
Storage Replica is a feature that enables you to replicate your storage, synchronously or asynchronously, from one cluster to another, from one server to another or from one volume to another, including in a stretched cluster implementation (you can read further information here). This feature is really useful because it replicates data at the block level, which means that every piece of data on a volume is replicated to the other. Think of a cluster in one room replicating its blocks to a cluster in a second room as a disaster recovery plan: you have an active cluster and a passive one, and if the first cluster fails, the time to get back into production is ridiculously small because you just have to make the passive cluster's volume active and start the virtual machines again.
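A synchronous server-to-server replication could be sketched like this (the server names, replication group names and drive letters are examples):

```powershell
# Replicate data volume D: (with log volume L:) from SRV01 to SRV02 synchronously
New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Synchronous
```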
Distributed Storage QoS
Prior to Windows Server 2016, Storage QoS policies were created per VHD(X), which means you had to create a policy for each VHD(X). Moreover, Hyper-V hosts were not aware that they shared the same bandwidth on the SAN system, so sometimes the overall Storage QoS meant nothing.
Distributed Storage QoS enables you to create Storage QoS policies that are stored in the cluster database. A single policy can be applied to several VHD(X) files, so you can now create several Storage QoS policies for different service levels (such as Gold and Bronze) with different minimum and maximum IOPS (for more information you can read this topic). Then you just have to apply these policies to the right VHD(X) files.
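Creating the service levels and applying one of them could be sketched like this (the policy names, IOPS values and VM name are examples):

```powershell
# Two policies stored in the cluster database
New-StorageQosPolicy -Name "Gold"   -MinimumIops 500 -MaximumIops 5000 -PolicyType Dedicated
New-StorageQosPolicy -Name "Bronze" -MinimumIops 100 -MaximumIops 1000 -PolicyType Dedicated

# Apply the Gold policy to every virtual hard disk of a VM
$gold = Get-StorageQosPolicy -Name "Gold"
Get-VMHardDiskDrive -VMName "VM-SQL01" | Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId
```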
Failover Cluster new features
Microsoft brings a lot of new features to Failover Clustering. The first is VM start ordering, which enables you to define dependencies between groups of virtual machines so that they start in the right order.
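The start order is expressed with cluster group sets; for example (the set, group and VM names are hypothetical), SQL VMs can be made to wait for the domain controllers:

```powershell
# One set for the domain controllers, one for the SQL servers
New-ClusterGroupSet -Name "DCs"
Add-ClusterGroupToSet -Name "DCs" -Group "VM-DC01"
New-ClusterGroupSet -Name "SQL"
Add-ClusterGroupToSet -Name "SQL" -Group "VM-SQL01"

# The SQL set starts only after the DCs set
Add-ClusterGroupSetDependency -Name "SQL" -Provider "DCs"
```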
Another great feature is the rolling cluster upgrade, which allows you to migrate a cluster from Windows Server 2012 R2 to Windows Server 2016 easily, without breaking it and recreating a new cluster.
If you are not using System Center Virtual Machine Manager, you can also use Failover Cluster Node Fairness to load-balance virtual machines across the cluster nodes based on CPU and memory utilization.
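Node fairness is driven by two cluster common properties; a sketch of an aggressive, always-on configuration:

```powershell
# AutoBalancerMode: 0 = disabled, 1 = balance when a node joins, 2 = always
(Get-Cluster).AutoBalancerMode  = 2
# AutoBalancerLevel: 1 = low, 2 = medium, 3 = high aggressiveness
(Get-Cluster).AutoBalancerLevel = 3
```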
Finally, Failover Cluster is no longer limited to a single NIC per subnet for SMB Multichannel. You can add all your SMB NICs or vNICs to the same subnet and Failover Cluster will configure them automatically, which is great for converged networks.
This list is not exhaustive; for a complete list of Failover Cluster improvements, you can read this topic.
PowerShell Direct
PowerShell Direct is a new feature that enables you to open a PowerShell session from a Hyper-V host to its virtual machines without using the network. This is a super cool feature for automation: if you don't leverage System Center Virtual Machine Manager, you can automate the network configuration and almost everything else in the virtual machine deployment. To understand how to use this feature, you can read this topic.
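For example (the VM name and guest settings are hypothetical), you can configure a freshly deployed VM directly from the host:

```powershell
$cred = Get-Credential   # guest administrator credentials

# Run commands inside the guest through the VMBus, without any network
Invoke-Command -VMName "VM01" -Credential $cred -ScriptBlock {
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.10.0.10 -PrefixLength 24
    Rename-Computer -NewName "APP01" -Restart
}

# Or open an interactive session
Enter-PSSession -VMName "VM01" -Credential $cred
```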
With Windows Server 2016, a lot of new features have been added, especially for networking, storage and Hyper-V. It is a really good operating system and you should consider moving from Windows Server 2012 R2 to Windows Server 2016.
Latest posts by Romain Serre (see all)
- Why moving from Windows Server 2012 R2 to 2016 for Hyper-V - August 16, 2016