Manage VM placement in Hyper-V cluster with VMM
Posted by Romain Serre on September 23, 2016

The placement of virtual machines in a Hyper-V cluster is an important step to ensure performance and high availability. To make an application highly available, a cluster is usually deployed across two or more virtual machines. If a Hyper-V node crashes, the application must keep working.

But VM placement also concerns storage and networking. Think about a storage solution where you have several LUNs (or Storage Spaces) tiered by service level: perhaps one LUN with HDDs in RAID 6 and another with SSDs in RAID 1. You don't want a VM that requires intensive IO to be placed on the HDD LUN.

Storage Classification in Virtual Machine Manager
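To make the idea concrete, here is a minimal Python sketch of such a classification-based placement rule. It only illustrates the concept, not VMM's actual placement engine, and the Lun, VmRequest and pick_lun names are made up for the example.

# Illustrative sketch only -- not VMM's actual placement logic.
# Hypothetical model: each LUN carries a storage classification
# ("Gold" = SSD RAID 1, "Bronze" = HDD RAID 6) and a VM declares
# the classification its IO profile requires.
from dataclasses import dataclass

@dataclass
class Lun:
    name: str
    classification: str   # e.g. "Gold" (SSD RAID 1) or "Bronze" (HDD RAID 6)
    free_gb: int

@dataclass
class VmRequest:
    name: str
    required_classification: str
    disk_gb: int

def pick_lun(vm: VmRequest, luns: list[Lun]) -> Lun:
    """Return a LUN matching the VM's required classification with enough free space."""
    candidates = [l for l in luns
                  if l.classification == vm.required_classification
                  and l.free_gb >= vm.disk_gb]
    if not candidates:
        raise RuntimeError(f"No {vm.required_classification} LUN can host {vm.name}")
    # Simple heuristic: pick the LUN with the most free space left.
    return max(candidates, key=lambda l: l.free_gb)

luns = [Lun("LUN-HDD-R6", "Bronze", 4000), Lun("LUN-SSD-R1", "Gold", 800)]
sql_vm = VmRequest("SQL01", "Gold", 200)      # IO-intensive VM -> SSD classification
print(pick_lun(sql_vm, luns).name)            # -> LUN-SSD-R1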


Don’t Fear but Respect Redirected IO with Shared VHDX
Posted by Didier Van Hoye on August 25, 2016

Introduction

When we got Shared VHDX in Windows Server 2012 R2 we were quite pleased as it opened up the road to guest clustering (Failover clustering in virtual machines) without needing to break through the virtualization layer with iSCSI or virtual Fibre Channel (vFC).

First of all, you need to be aware of the limits of using a shared VHDX in Windows Server 2012 R2.

  1. You cannot perform storage live migration
  2. You cannot resize the VHDX online
  3. You cannot do host-based backups (i.e. you need to do in-guest backups)
  4. No support for checkpoints
  5. No support for Hyper-V Replica

If you cannot live with these limits, that's a good indicator this is not for you. But if you can, you should also take care of the potential redirected IO impact that can and will occur. This doesn't mean it won't work for you, but you need to know about it, design and build for it, and test it realistically against your real-life workloads.
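As a quick sanity check before you commit to a design, the small Python sketch below simply encodes the five limitations above as a checklist and tells you whether a shared VHDX design fits your requirements. The feature names and the shared_vhdx_fits function are hypothetical, just a way to make the decision explicit.

# Minimal sketch: the Windows Server 2012 R2 shared VHDX limitations
# listed above, encoded as a pre-deployment checklist.
SHARED_VHDX_2012R2_UNSUPPORTED = {
    "storage_live_migration",
    "online_vhdx_resize",
    "host_based_backup",
    "checkpoints",
    "hyper_v_replica",
}

def shared_vhdx_fits(required_features: set[str]) -> tuple[bool, set[str]]:
    """Return (fits, blocking_features) for a guest cluster design."""
    blocking = required_features & SHARED_VHDX_2012R2_UNSUPPORTED
    return (not blocking, blocking)

fits, blocking = shared_vhdx_fits({"host_based_backup", "checkpoints"})
print(fits, blocking)   # False -> shared VHDX is not for this design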

active guest cluster node is running on the Hyper-V host


How Transparent Page Sharing memory deduplication technology works in VMware vSphere 6.0
Posted by Alex Samoylenko on May 30, 2016

You may know that the memory page deduplication technology Transparent Page Sharing (TPS) becomes useless with large memory pages (it is even disabled by default in the latest versions of VMware vSphere). However, this doesn't mean that TPS goes into the trash bin: when the host server runs low on memory, ESXi may break large pages into small ones and deduplicate them afterwards. The large pages are prepared for deduplication beforehand: once the memory workload grows past a certain threshold, the large pages are broken into small ones, and when the workload peaks, a forced deduplication cycle is activated.

Hash Table
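To picture the mechanism, here is a simplified Python sketch of hash-based page sharing: each small page is hashed into a hash table, a hash hit is confirmed with a full byte-for-byte comparison, and matching pages are then mapped to a single shared copy. This illustrates the principle only; it is not ESXi's implementation.

import hashlib

PAGE_SIZE = 4096   # small page; a 2 MB large page is first broken into these

def split_large_page(large_page: bytes) -> list[bytes]:
    """Break a 2 MB large page into 4 KB small pages (what ESXi does under memory pressure)."""
    return [large_page[i:i + PAGE_SIZE] for i in range(0, len(large_page), PAGE_SIZE)]

def deduplicate(pages: list[bytes]) -> dict[int, int]:
    """Map each page index to the index of the single shared copy it can point to."""
    hash_table: dict[str, int] = {}   # page hash -> index of first page with that content
    mapping: dict[int, int] = {}
    for idx, page in enumerate(pages):
        digest = hashlib.sha1(page).hexdigest()
        candidate = hash_table.get(digest)
        # A hash hit is only a hint: confirm with a full comparison before sharing.
        if candidate is not None and pages[candidate] == page:
            mapping[idx] = candidate   # share: both indexes now reference one read-only page
        else:
            hash_table[digest] = idx
            mapping[idx] = idx         # keep a private copy
    return mapping

# Under memory pressure a 2 MB large page is broken into 512 small pages;
# identical (here zero-filled) small pages then collapse onto a single shared copy.
large_page = bytes(2 * 1024 * 1024)
mapping = deduplicate(split_large_page(large_page))
print(len(mapping), "small pages ->", len(set(mapping.values())), "physical copy")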


TBW from SSDs with S.M.A.R.T Values in ESXi
Posted by Oksana Zybinskaya on May 23, 2016

Solid-state drives are becoming widely implemented in ESXi hosts for caching (vFlash Read Cache, PernixData FVP), Virtual SAN or plain datastores. Unfortunately, SSD cells have a limited number of write cycles, ranging from roughly 1,000 in consumer TLC SSDs up to 100,000 in enterprise SLC-based SSDs. Lifetime can be estimated from the TBW value the vendor provides in its specification: it describes how many terabytes can be written to the entire device until the warranty expires.

smartctl in ESXi
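As a rough illustration of the math, the Python sketch below converts a S.M.A.R.T. write counter into terabytes written and compares it with the rated TBW. It assumes the drive exposes total host writes as attribute 241 (Total_LBAs_Written) in 512-byte units, which is vendor-specific, so treat the attribute and unit as assumptions to verify against your SSD's datasheet.

# Rough sketch of the TBW calculation. Assumption: the drive reports total host
# writes via S.M.A.R.T. attribute 241 (Total_LBAs_Written) in 512-byte units.
SECTOR_BYTES = 512

def terabytes_written(total_lbas_written: int, sector_bytes: int = SECTOR_BYTES) -> float:
    """Convert an LBA write counter into decimal terabytes written."""
    return total_lbas_written * sector_bytes / 1e12

def warranty_used_pct(total_lbas_written: int, rated_tbw: float) -> float:
    """Share of the vendor's rated TBW already consumed, in percent."""
    return 100.0 * terabytes_written(total_lbas_written) / rated_tbw

# Example: smartctl reports 2,345,678,901 LBAs written on a drive rated for 150 TBW.
lbas = 2_345_678_901
print(f"{terabytes_written(lbas):.2f} TB written")                      # ~1.20 TB
print(f"{warranty_used_pct(lbas, rated_tbw=150):.2f} % of rated TBW")   # ~0.80 %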


A closer look at NUMA Spanning and virtual NUMA settings
Posted by Didier Van Hoye on April 28, 2016

Introduction

With Windows Server 2012, Hyper-V became truly NUMA aware. A virtual NUMA topology is presented to the guest operating system. By default, the virtual NUMA topology is optimized to match the NUMA topology of the physical host. This enables Hyper-V to deliver optimal performance for virtual machines with high-performance, NUMA-aware workloads where large numbers of vCPUs and lots of memory come into play. A great and well-known example of this is SQL Server.
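To illustrate the idea of matching the guest topology to the host, here is a small Python sketch that caps each virtual NUMA node at the size of a physical NUMA node. It is only an illustration of the default behaviour described above, not Hyper-V's exact algorithm, and the names used are hypothetical.

# Illustration only: derive a virtual NUMA topology by capping each virtual
# node at the size of a physical NUMA node, the idea behind Hyper-V's defaults.
import math
from dataclasses import dataclass

@dataclass
class HostNumaNode:
    logical_processors: int
    memory_gb: int

def virtual_numa_topology(vcpus: int, vm_memory_gb: int, host_node: HostNumaNode):
    """Return (virtual node count, vCPUs per node, GB per node) for a guest."""
    nodes_for_cpu = math.ceil(vcpus / host_node.logical_processors)
    nodes_for_mem = math.ceil(vm_memory_gb / host_node.memory_gb)
    vnodes = max(nodes_for_cpu, nodes_for_mem, 1)
    return vnodes, math.ceil(vcpus / vnodes), math.ceil(vm_memory_gb / vnodes)

# Host with NUMA nodes of 16 logical processors / 128 GB each; a 24 vCPU, 192 GB
# SQL Server VM is presented as 2 virtual NUMA nodes of 12 vCPUs and 96 GB,
# matching the physical layout.
print(virtual_numa_topology(vcpus=24, vm_memory_gb=192, host_node=HostNumaNode(16, 128)))
# -> (2, 12, 96)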
