Manage VM placement in Hyper-V cluster with VMM
Posted by Romain Serre on September 23, 2016

The placement of virtual machines in a Hyper-V cluster is an important step to ensure performance and high availability. To make an application highly available, a cluster is usually deployed across two or more virtual machines. If a Hyper-V node crashes, the application must keep working.

But VM placement also concerns storage and network. Think about a storage solution where you have several LUNs (or Storage Spaces) with different service levels: maybe a LUN backed by HDDs in RAID 6 and another backed by SSDs in RAID 1. You don't want a VM that requires intensive IO to be placed on the HDD LUN.
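
As the screenshot below suggests, VMM handles this through storage classifications: tag each tier once, and placement can then match VMs to the right tier. A minimal sketch, assuming the VMM PowerShell module is installed (the server and classification names are placeholders, not taken from the article):

# Connect to the VMM management server (placeholder name)
Import-Module virtualmachinemanager
$vmm = Get-SCVMMServer -ComputerName "vmm01.lab.local"

# Describe the two service levels from the example above
New-SCStorageClassification -Name "Gold"   -Description "SSD - RAID 1 - intensive IO"
New-SCStorageClassification -Name "Bronze" -Description "HDD - RAID 6 - capacity"

The classifications are then assigned to the corresponding LUNs or CSVs in the VMM fabric, so a VM or template that asks for Gold storage is not placed on the HDD volume.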

Storage Classification in Virtual Machine Manager

Is NVMe Really Revolutionary?
Posted by Jon Toigo on August 19, 2016

To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – it is the most revolutionary thing that has ever happened in business computing. While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest in a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course. It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the von Neumann machine. Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle. That is why engineers used lots of memory – as caches in front of disk storage or as buffers directly on the disk electronics – to help mask or spoof the speed mismatch.

latency comparison

How to Deploy and Manage Storage Spaces Direct Cluster using SCVMM 2016?
Posted by Charbel Nemnom on August 18, 2016

Hyper-converged stack

Windows Server 2016 – Storage Spaces Direct Hyper-converged [image credit: Microsoft]

Introduction

With the release of Windows Server 2016, Microsoft is introducing Storage Spaces Direct (S2D), which enables building highly available Software-Defined Storage systems with locally attached storage. This storage can be leveraged by VMs running on the same cluster (in hyper-converged mode), or it can be presented as a file share (in disaggregated mode). The hyper-converged deployment scenario has the Hyper-V (compute) and Storage Spaces Direct (storage) components on the same cluster, and virtual machine files are stored on local CSVs. Once Storage Spaces Direct is configured and the CSV volumes are available, configuring and provisioning Hyper-V is the same process and uses the same tools that you would use with any other Hyper-V deployment on a failover cluster.
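
The article walks through driving this from SCVMM 2016; for reference, the same hyper-converged result can also be reached with the in-box cmdlets. A minimal sketch, assuming four nodes named s2d-node1 to s2d-node4 with eligible local disks (all names and sizes below are placeholders):

# Validate the nodes for S2D and build the cluster without shared storage
Test-Cluster -Node s2d-node1, s2d-node2, s2d-node3, s2d-node4 `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
New-Cluster -Name S2D-CL01 -Node s2d-node1, s2d-node2, s2d-node3, s2d-node4 -NoStorage

# Claim the local disks into a pool and enable Storage Spaces Direct
Enable-ClusterStorageSpacesDirect -CimSession S2D-CL01

# Carve a CSV volume out of the pool for the VMs of the hyper-converged cluster
New-Volume -CimSession S2D-CL01 -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" `
    -FileSystem CSVFS_ReFS -Size 2TB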

Instant Clone functionality in VMware Horizon 7 – how quickly and efficiently it works
Posted by Alex Samoylenko on July 22, 2016

As far back as VMworld 2014, VMware announced its Project Fargo technology, also broadly known as VMFork. It allows a working copy of a running virtual machine to be made very quickly on the VMware vSphere platform.

The VMFork technology involves the on-the-fly creation of a virtual machine clone (a VMX file and an in-memory process) that uses the same memory (shared memory) as the parent VM. At the same time, the child VM cannot write to the shared memory and uses its own allocated memory to write its data. Disks work the same way: with copy-on-write technology, changes relative to the parent VM's base disk are written to the child VM's delta disk:

VMFork technology

Comparing vSphere Distributed Switch and Cisco Nexus 1000v switch
Posted by Askar Kopbayev on July 7, 2016

When the time comes to decide whether to go with the vSphere Distributed Switch or the Cisco Nexus 1000v, it is hard to tell which product is superior, and you will find many different and quite contradictory opinions.

While quite often it is a political decision, based on the answer to the question “Who is going to manage the virtual networking?”, there are many other aspects that you, as an infrastructure designer, should be aware of.

Recently VMware announced the End of Sale of the Nexus 1000v, which caused some confusion amongst clients. I know customers who were pretty sure Cisco had discontinued the Nexus 1000v, but rest assured, Cisco is still fully committed to continuing the development of virtual networking and to supporting the Nexus 1000v in the latest and future versions of vSphere.

Nexus 1KV Essential and Advanced Editions

Docker: Docker Datacenter in Azure
Posted by Florent Appointaire on June 24, 2016

Docker

Docker Datacenter on Azure and AWS was announced on Tuesday, June 21st, 2016 at DockerCon.

How Transparent Page Sharing memory deduplication technology works in VMware vSphere 6.0
Posted by Alex Samoylenko on May 30, 2016

You may know that the memory page deduplication technology Transparent Page Sharing (TPS) becomes useless with large memory pages (it is even disabled in the latest versions of VMware vSphere). However, this doesn't mean that TPS goes into the trash bin, because when resources are lacking on the host server, ESXi may break large pages into small ones and deduplicate them afterwards. In the process, the large pages are prepared for deduplication beforehand: when the memory workload grows to a certain limit, the large pages are broken into small ones, and then, when the workload peaks, a forced deduplication cycle is activated.
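
The "even disabled" remark largely refers to the salting introduced around vSphere 6.0, which by default restricts page sharing to within a single VM. A hedged PowerCLI sketch for checking (and, if you accept the security trade-off, relaxing) that behaviour on a host, with placeholder vCenter and host names:

Connect-VIServer -Server "vcenter.lab.local"
$esx = Get-VMHost -Name "esx01.lab.local"

# 2 (default since 6.0) = pages are only shared within a VM
# 0 = classic behaviour, pages can be shared between VMs
Get-AdvancedSetting -Entity $esx -Name "Mem.ShareForceSalting"
Get-AdvancedSetting -Entity $esx -Name "Mem.ShareForceSalting" | Set-AdvancedSetting -Value 0 -Confirm:$false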

Hash Table

NUMA and Cluster-on-die
Posted by Askar Kopbayev on May 27, 2016

What is NUMA?

NUMA stands for Non-Uniform Memory Access, and Nehalem was the first generation of Intel CPUs in which NUMA was introduced. However, the first commercial implementation of NUMA goes back to 1985, when Dan Gielan developed it for the Honeywell Information Systems Italy XPS-100.
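
As a quick illustration (assuming the Hyper-V PowerShell module on a Windows host; the VM name is a placeholder), you can look at the NUMA layout the hypervisor sees and at the virtual NUMA limits of a VM:

# Processors and memory available per NUMA node on the host
Get-VMHostNumaNode

# Virtual NUMA settings of a VM: how many vCPUs may land in one virtual node/socket
Get-VM -Name "VM01" | Get-VMProcessor |
    Select-Object VMName, Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket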

Uniform Memory Access topology

Rename VM network adapter automatically from Virtual Machine Manager 2016
Posted by Romain Serre on May 16, 2016

The next version of Hyper-V, which comes with Windows Server 2016, brings a new feature called Virtual Network Adapter Identification. This feature enables you to specify a name when a network adapter is added to a virtual machine and to retrieve that same name inside the VM. It can also be managed from Virtual Machine Manager 2016, and it is really great for automating the renaming of virtual network adapters inside VMs. In this topic I'll show you how it works and how to automate the renaming of the network adapters with PowerShell.
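
A minimal sketch of the workflow (the VM, switch and adapter names are placeholders, not necessarily those used later in the article):

# On the Hyper-V host: add the adapter with a meaningful name and enable device naming
Add-VMNetworkAdapter -VMName "VM01" -Name "Management" -SwitchName "SW-LAN"
Get-VMNetworkAdapter -VMName "VM01" -Name "Management" | Set-VMNetworkAdapter -DeviceNaming On

# Inside the guest: read the advertised name and rename the adapter to match it
Get-NetAdapterAdvancedProperty -DisplayName "Hyper-V Network Adapter Name" |
    Where-Object { $_.DisplayValue } |
    ForEach-Object { Rename-NetAdapter -Name $_.Name -NewName $_.DisplayValue }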

Hardware configuration

SMB3: Overview
Posted by Anton Kolomyeytsev on May 10, 2016

This is an overview of the Server Message Block (SMB3) protocol from Microsoft. It offers a short insight into the history of SMB's creation and development over the years (the idea is technically around 30 years old). As of Windows Server 2012, the protocol gained new features: SMB Transparent Failover, SMB Scale Out, SMB Multichannel, SMB Direct, SMB Encryption, VSS for SMB file shares, SMB Directory Leasing, and SMB PowerShell. In Windows Server 2016, it also gained pre-authentication integrity and cluster dialect fencing. The post concentrates on RDMA-capable SMB Direct and on SMB Multichannel, which spreads a session across multiple network paths (much as MPIO does for block storage), and their benefits. It is also an introduction to a series of tests aimed at creating SMB 3.0 file servers in an unusual way.
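
As a quick, hedged illustration of the two features the post focuses on, the in-box SMB cmdlets in Windows Server 2012 R2/2016 let you check whether SMB Direct and SMB Multichannel are actually in play:

# Server side: which interfaces are RSS- and RDMA-capable?
Get-SmbServerNetworkInterface
Get-NetAdapterRdma

# Client side, with a share in use: which connections did Multichannel build, and at which dialect?
Get-SmbMultichannelConnection
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, NumOpens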
