Manage VM placement in Hyper-V cluster with VMM
Posted by Romain Serre on September 23, 2016

The placement of virtual machines in a Hyper-V cluster is an important step to ensure performance and high availability. To make an application highly available, a cluster is usually deployed across two or more virtual machines. If a Hyper-V node crashes, the application must keep working.

But VM placement also concerns storage and network. Think about a storage solution where you have several LUNs (or Storage Spaces) tiered by service level: maybe one LUN with HDDs in RAID 6 and another with SSDs in RAID 1. You don’t want a VM that requires intensive IO to be placed on the HDD LUN.
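The service-level idea can be sketched as a simple placement rule. The tier names and IOPS thresholds below are purely hypothetical, for illustration only; VMM itself models this with storage classifications, not Python:

```python
# Illustrative sketch: place a VM on the storage tier matching its IO profile.
# Tier names and IOPS thresholds are hypothetical, not part of VMM.

TIERS = {
    "Gold":   {"media": "SSD RAID 1", "max_iops": 50000},
    "Bronze": {"media": "HDD RAID 6", "max_iops": 2000},
}

def pick_tier(required_iops):
    """Return the smallest tier that satisfies the VM's IO requirement."""
    # Walk tiers from lowest to highest IOPS capacity and take the first fit,
    # so light workloads don't consume the expensive SSD tier.
    for name, props in sorted(TIERS.items(), key=lambda t: t[1]["max_iops"]):
        if props["max_iops"] >= required_iops:
            return name
    raise ValueError("no tier satisfies the requested IOPS")

print(pick_tier(1500))   # a light workload fits on the HDD tier
print(pick_tier(20000))  # an IO-intensive VM lands on the SSD tier
```

In VMM, the same intent is expressed by tagging LUNs with storage classifications so placement can match a VM's requirement to the right class.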

Storage Classification in Virtual Machine Manager

Learn More


Samsung reveals new super-fast 960 Pro and 960 Evo M.2 NVMe SSDs
Posted by Oksana Zybinskaya on September 23, 2016

Samsung announced its 960 PRO and 960 Evo, the next generation M.2 PCIe SSDs. Like the 950 Pro, the 960 Pro and 960 Evo are PCIe 3.0 x4 drives using the latest NVMe protocol for data transfer. The 960 Pro offers a peak read speed of 3.5GB/s and a peak write speed of 2.1GB/s, while the Evo offers 3.2GB/s and 1.9GB/s respectively. The 950 topped out at a mere 2.5GB/s and 1.5GB/s.

The 960 Pro and the 960 Evo are planned for release in October. The Pro starts at $329 for 512GB of storage, rising up to a cool $1,299 for a 2TB version. The Evo price goes from $129 for a 250GB version to $479 for a 1TB version.

Samsung 960 Pro M.2 NVMe SSDs
Learn More


Windows Server 2016 Hyper-V Backup Rises to the challenges
Posted by Didier Van Hoye on September 19, 2016

Introduction

In Windows Server 2016, Microsoft improved Hyper-V backup to address many of the concerns mentioned in our previous article, Hyper-V backup challenges Windows Server 2016 needs to address:

  • They avoid the need for agents by making the APIs remotely accessible. It’s all WMI calls directly to Hyper-V.
  • They implemented their own CBT mechanism for Windows Server 2016 Hyper-V to reduce the amount of data that needs to be copied during every backup. This can be leveraged by any backup vendor and takes the responsibility of creating CBT away from the backup vendors, which makes it easier for them to support new Hyper-V releases faster. It also avoids the need for inserting drivers into the IO path of the Hyper-V hosts. Sure, testing and certification still has to happen, as all vendors can now be impacted by a bug Microsoft introduces.
  • They are no longer dependent on the host VSS infrastructure. This eliminates the storage overhead, as well as the storage fabric IO overhead and performance issues, associated with taking host-level VSS snapshots of the entire LUN/CSV for even a single VM.
  • This helps avoid the need for hardware VSS providers delivered by storage vendors and delivers better results with storage solutions that don’t offer hardware providers.
  • Storage vendors and backup vendors can still integrate this with their snapshots for speedy and easy backups and restores. But as the backup work at the VM level is separated from an (optional) host VSS snapshot, the performance hit is smaller and the total duration significantly reduced.
  • It’s efficient in regard to the amount of data that needs to be copied to the backup target and stored there. This reduces the capacity needed and, for some vendors, the almost hard dependency on deduplication to make backups even feasible in regard to cost.
  • These capabilities are available to anyone (backup vendors, storage vendors, home-grown PowerShell scripts …) who wishes to leverage them, and they don’t prevent anyone from implementing synthetic full backups, merging backups as they age, etc. It’s capable enough to allow great backup solutions to be built on top of it.
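The CBT idea in the list above fits in a few lines: between backups, track which blocks were written and copy only those. This is a conceptual Python sketch of changed block tracking, not the actual Hyper-V implementation (which works at the virtual disk layer):

```python
# Conceptual sketch of changed block tracking (CBT): only blocks written
# since the last backup are copied. Illustrative only; Hyper-V's real
# mechanism tracks changes inside the VHDX stack, not in a Python dict.

class ChangeTracker:
    def __init__(self, disk):
        self.disk = disk     # dict: block number -> block data
        self.dirty = set()   # block numbers written since the last backup

    def write(self, block, data):
        self.disk[block] = data
        self.dirty.add(block)

    def incremental_backup(self):
        """Copy only the changed blocks, then reset the dirty map."""
        delta = {b: self.disk[b] for b in self.dirty}
        self.dirty.clear()
        return delta

disk = ChangeTracker({0: "a", 1: "b", 2: "c"})
disk.write(1, "B")
print(disk.incremental_backup())  # only block 1 is copied: {1: 'B'}
print(disk.incremental_backup())  # nothing changed since: {}
```

The point of the sketch is the payoff described above: a backup vendor consuming such a map never has to scan or copy the unchanged blocks, which is where the capacity and duration savings come from.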

Let’s dive in together and take a closer look.

Windows Server 2016 Hyper-V backup
Learn More


WS2016: Start with Windows Containers
Posted by Florent Appointaire on September 16, 2016

Windows Server

With the next release of Windows Server, 2016, which will be available during the Ignite conference (end of September), a new feature will be released: Windows Containers.

Learn More


Hyper-V backup challenges Windows Server 2016 needs to address
Posted by Didier Van Hoye on September 12, 2016

Introduction

Personally, I have been very successful at providing good backup designs for Hyper-V in both small and larger environments, using budgets that range from “make do” to “well-funded”. How does one achieve this? Two factors. The first factor is knowing the strengths and limitations of the various Hyper-V versions when you design the backup solution. Bar the ever better scalability, performance and capabilities with each new version of Hyper-V, the improvements in backup from 2012 to 2012 R2, for example, were a prime motivator to upgrade. The second factor of success is that I demand a mandate and control over the infrastructure stack to do so. In many cases you are not that lucky and can’t change much in already existing environments. Sometimes not even in new environments, when the gear and solutions have already been chosen and purchased, and the design is deployed before you get involved.

Windows Server 2008 (R2) - 2012 Hyper-V Backup
Learn More


How VMware sees IT future. VMworld 2016. Day 1.
Posted by Alex Samoylenko on September 1, 2016

As you know, VMworld 2016, the main virtualization conference arranged by VMware, is now being held in Las Vegas. On the first day of the conference several interesting announcements were made. For example, VMware Cloud Foundation was presented; it will soon be available on the IBM platform and later with other vendors as well. It allows customers to get a ready-made infrastructure on site, with the necessary software and hardware components and preconfigured, integrated control and automation tools like NSX, Virtual SAN and vRealize:

VMware Cloud Foundation
Learn More


Don’t Fear but Respect Redirected IO with Shared VHDX
Posted by Didier Van Hoye on August 25, 2016

Introduction

When we got Shared VHDX in Windows Server 2012 R2 we were quite pleased as it opened up the road to guest clustering (Failover clustering in virtual machines) without needing to break through the virtualization layer with iSCSI or virtual Fibre Channel (vFC).

First of all, you need to be aware of the limits of using a shared VHDX in Windows Server 2012 R2.

  1. You cannot perform storage live migration
  2. You cannot resize the VHDX online
  3. You cannot do host based backups (i.e. you need to do in guest backups)
  4. No support for checkpoints
  5. No support for Hyper-V Replica

If you cannot live with these, that’s a good indicator this is not for you. But if you can, you should also take care of the potential redirected IO impact that can and will occur. This doesn’t mean it won’t work for you, but you need to know about it, design and build for it and test it realistically for your real life workloads.

active guest cluster node is running on the Hyper-V host

Learn More


vSphere Auto Deploy
Posted by Askar Kopbayev on August 22, 2016

Auto Deploy is one of the most underestimated vSphere features. I have seen many vSphere designs where using Auto Deploy was dismissed as overcomplicated and a manual build of ESXi servers was preferred. That is pretty frustrating, as we, as IT professionals, strive to automate as much as possible in our day-to-day work.

Configuring Auto Deploy is definitely not as simple as VSAN, for instance, but it really pays off when you manage hundreds or thousands of ESXi hosts.

ESXi Offline Bundle

Learn More


Musings on Windows Server Converged Networking & Storage
Posted by Didier Van Hoye on August 19, 2016

Why you should learn about SMB Direct, RDMA & lossless Ethernet for both networking & storage solutions

Fully converged Hyper-V QoS (courtesy of Microsoft)
Learn More


Is NVMe Really Revolutionary?
Posted by Jon Toigo on August 19, 2016

To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – it is the most revolutionary thing that has ever happened in business computing.  While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest of a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course.  It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the Von Neumann machine.  Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle.  That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on disk electronics directly – to help mask or spoof the mismatch of speed.
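How well a cache masks that mismatch follows from the standard average-access-time formula. The latency figures below are illustrative round numbers, not measurements:

```python
# Average access time = hit_ratio * t_cache + (1 - hit_ratio) * t_backing.
# Illustrative round numbers: ~100 ns for a DRAM cache hit versus ~10 ms
# (10,000,000 ns) for a seek on a mechanical disk.

def avg_access_ns(hit_ratio, t_cache_ns=100, t_disk_ns=10_000_000):
    """Expected access latency in nanoseconds for a given cache hit ratio."""
    return hit_ratio * t_cache_ns + (1 - hit_ratio) * t_disk_ns

for h in (0.90, 0.99, 0.999):
    print(f"hit ratio {h:.1%}: {avg_access_ns(h) / 1000:.1f} us on average")
```

The arithmetic makes the engineers' trick above concrete: even a 99% hit ratio leaves the average near 100 microseconds, because the rare disk misses dominate, which is exactly the gap that flash, and bus-attached flash like NVMe, narrows.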

latency comparison

Learn More
