The placement of virtual machines in a Hyper-V cluster is an important step in ensuring performance and high availability. To make an application highly available, it is usually deployed as a cluster spread across two or more virtual machines. If a Hyper-V node crashes, the application must keep working.
But VM placement also concerns storage and networking. Consider a storage solution with several LUNs (or Storage Spaces) tiered by service level: perhaps one LUN backed by HDDs in RAID 6 and another backed by SSDs in RAID 1. You don't want a VM that requires intensive I/O to be placed on the HDD LUN.
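To make this concrete, here is a minimal PowerShell sketch of both placement concerns. The VM names, anti-affinity class name, and CSV path are examples, not values from the original post; the cmdlets are from the built-in FailoverClusters and Hyper-V modules.

```powershell
# Keep two VMs of the same application tier on different Hyper-V nodes
# by assigning the same anti-affinity class to their cluster groups.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("WebTier")

(Get-ClusterGroup -Name "Web01").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "Web02").AntiAffinityClassNames = $class

# Storage placement: move an IO-intensive VM's files to the SSD-backed CSV.
Move-VMStorage -VMName "SQL01" `
    -DestinationStoragePath "C:\ClusterStorage\SSD-Volume\SQL01"
```

With the same `AntiAffinityClassNames` value set, the cluster will try to keep Web01 and Web02 on different nodes during failover and placement decisions.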
Personally, I have been very successful at providing good backup designs for Hyper-V in environments from small to large, on budgets ranging from "make do" to "well funded". How does one achieve this? Two factors. The first is knowing the strengths and limitations of the various Hyper-V versions when you design the backup solution. Beyond the ever-improving scalability, performance, and capabilities of each new version of Hyper-V, the backup improvements from 2012 to 2012 R2, for example, were a prime motivator to upgrade. The second factor is that I demand a mandate and control over the infrastructure stack. In many cases you are not that lucky and cannot change much in already existing environments; sometimes not even in new environments, when the gear and solutions have already been chosen and purchased, and the design deployed, before you get involved.
As you know, VMworld 2016, the main virtualization conference, arranged by VMware, is currently being held in Las Vegas. Several interesting announcements were made on the first day of the conference. For example, VMware presented VMware Cloud Foundation, which will soon be available on the IBM platform and later from other vendors as well. It gives customers a ready-made on-premises infrastructure with the necessary software and hardware components, and with control and automation tools such as NSX, Virtual SAN and vRealize already configured and integrated:
Windows Server 2016 – Storage Spaces Direct Hyper-converged [image credit: Microsoft]
With the release of Windows Server 2016, Microsoft is introducing Storage Spaces Direct (S2D), which enables building highly available software-defined storage systems from locally attached storage. This storage can be consumed by VMs running on the same cluster (hyper-converged mode), or it can be presented as a file share (disaggregated mode). In the hyper-converged deployment scenario, the Hyper-V (compute) and Storage Spaces Direct (storage) components run on the same cluster, and virtual machine files are stored on local CSVs. Once Storage Spaces Direct is configured and the CSV volumes are available, configuring and provisioning Hyper-V follows the same process and uses the same tools as any other Hyper-V deployment on a failover cluster.
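The hyper-converged flow described above can be sketched in a few PowerShell lines. This assumes an existing failover cluster with eligible local disks; the volume and VM names are example values.

```powershell
# Claim the eligible local disks on all cluster nodes and build the S2D pool.
Enable-ClusterStorageSpacesDirect

# Carve a CSV out of the pool for virtual machine files.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMVolume01" `
    -FileSystem CSVFS_ReFS -Size 2TB

# From here on, Hyper-V provisioning is the usual process:
New-VM -Name "VM01" -MemoryStartupBytes 4GB -Generation 2 `
    -Path "C:\ClusterStorage\VMVolume01"
Add-ClusterVirtualMachineRole -VMName "VM01"
```

The point of the last two lines is exactly the one the post makes: once the CSV exists, nothing about VM creation or clustering is S2D-specific.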
Microsoft Azure provides a way to deploy an Azure VM from the Marketplace or from a generalized image. When you deploy an Azure VM from the Marketplace, no customization is applied: you have to configure the operating system yourself to serve as your master. When you have several Azure VMs to deploy, customizing each system can be time-consuming. Many companies keep a master or baseline image: in a VMDK for VMware, in a VHD(X) for Hyper-V, or in a WIM image. In this topic we will see how to create a generalized image from a single Azure VM and how to deploy Azure VMs from this generalized image.
In the old portal (https://manage.windowsazure.com), all steps can be done from the GUI as well as with PowerShell. In the new portal (https://portal.azure.com), you have to use PowerShell because this feature is not yet integrated into the portal. In this topic, I will work from the new portal (AzureRM).
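As a rough outline of the AzureRM flow, the capture looks like the sketch below. The resource group, VM, and path names are examples; inside the guest you would first run `sysprep /generalize /oobe /shutdown` (for Windows) before capturing.

```powershell
# Deallocate the sysprepped VM, then mark it as generalized.
Stop-AzureRmVM -ResourceGroupName "RG-Lab" -Name "MasterVM" -Force
Set-AzureRmVM  -ResourceGroupName "RG-Lab" -Name "MasterVM" -Generalized

# Capture the generalized VM: the VHD and a deployment template are
# written to the VM's storage account.
Save-AzureRmVMImage -ResourceGroupName "RG-Lab" -Name "MasterVM" `
    -DestinationContainerName "images" -VHDNamePrefix "master" `
    -Path "C:\Temp\MasterTemplate.json"

# New VMs can then be deployed from the captured template.
New-AzureRmResourceGroupDeployment -ResourceGroupName "RG-Lab" `
    -TemplateFile "C:\Temp\MasterTemplate.json"
```

This is only a skeleton: the captured template expects parameters (VM name, admin credentials, NIC, and so on) that you supply at deployment time.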
Back at VMworld 2014, VMware announced the Project Fargo technology, also widely known as VMFork. It makes it possible to create a working copy of a running virtual machine on the VMware vSphere platform very quickly.
The VMFork technology involves on-the-fly creation of a virtual machine clone (a VMX file and an in-memory process) that shares the parent VM's memory. The child VM cannot write to that shared memory; it writes its own data to separately allocated pages. Disks work the same way: with copy-on-write, the parent VM's base disk stays untouched, and the child VM's changes are written to its own delta disk:
Today, we will see how to join an Ubuntu server (version 16.04) to an Active Directory domain. This can be useful if you want your administrators to use their domain accounts to connect to servers, and so on.
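One common approach on Ubuntu 16.04 is realmd with SSSD. The sketch below is a hedged outline of that method, not necessarily the exact steps of the original post; the domain name and group are placeholders.

```shell
# Install realmd, SSSD, and the AD join tooling.
sudo apt-get update
sudo apt-get install -y realmd sssd sssd-tools adcli packagekit

# Discover the domain, then join it (prompts for the password of an
# account allowed to join computers to the domain).
sudo realm discover contoso.local
sudo realm join --user=Administrator contoso.local

# Create home directories automatically at first logon.
echo 'session required pam_mkhomedir.so skel=/etc/skel umask=0022' | \
    sudo tee -a /etc/pam.d/common-session

# Restrict logons to a specific domain group.
sudo realm permit --groups "Domain Admins"
```

After the join, `realm list` shows the domain configuration and domain users can authenticate with `user@contoso.local`.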
Many of you know that VMware has a technology called vSphere Integrated Containers (VIC). It runs Docker (and other) containers inside small virtual machines with a lightweight operating system based on a Linux distribution.
This operating system is VMware Photon OS 1.0, which was finally released just recently. It is the first release version of this operating system from VMware, but in the long run it could become the main platform for virtual appliances, replacing the long-serving SUSE Linux.