VMware Horizon FLEX. The architecture and the key features
Posted by Alex Samoylenko on October 20, 2016

Two years ago, at the VMworld Europe 2014 conference, VMware announced the release of the VMware Horizon FLEX solution. It is a desktop virtualization platform that allows running virtual machines locally on users’ computers, both Mac and Windows, even when a connection to the company’s datacenter isn’t available. The virtual machine the user works with can run either a Windows or a Linux guest OS.

Many virtual infrastructure administrators still remember that VMware used to offer the VMware ACE and VMware View Local Mode products, which were discontinued long ago and have now been replaced by the FLEX technology. Since VMware Horizon FLEX 1.9, along with new versions of the Workstation and Fusion desktop platforms, has just been released, let’s take a closer look at the FLEX solution and consider its key features.

VMware Horizon FLEX isn’t a standalone product, but a combined technology based on three solutions:

three solutions of VMware Horizon FLEX

Windows Server 2016 Hyper-V Backup Rises to the challenges
Posted by Didier Van Hoye on September 19, 2016

Introduction

In Windows Server 2016, Microsoft improved Hyper-V backup to address many of the concerns mentioned in our previous post, Hyper-V backup challenges Windows Server 2016 needs to address:

  • They avoid the need for agents by making the APIs remotely accessible. It’s all WMI calls directly to Hyper-V.
  • They implemented their own CBT (changed block tracking) mechanism for Windows Server 2016 Hyper-V to reduce the amount of data that needs to be copied during every backup. Any backup vendor can leverage it, which takes the responsibility of building CBT away from the backup vendors and makes it easier for them to support new Hyper-V releases faster. It also avoids the need to insert drivers into the IO path of the Hyper-V hosts. Sure, testing and certification still have to happen, as all vendors can now be impacted by a bug Microsoft introduces.
  • They are no longer dependent on the host VSS infrastructure. This eliminates the storage overhead, as well as the storage fabric IO overhead, associated with the performance issues of taking host-level VSS snapshots of an entire LUN/CSV for even a single VM.
  • This helps avoid the need for hardware VSS providers delivered by storage vendors and delivers better results with storage solutions that don’t offer hardware providers.
  • Storage vendors and backup vendors can still integrate this with their snapshots for speedy and easy backups and restores. But as the backup work at the VM level is separated from an (optional) host VSS snapshot, the performance hit is smaller and the total duration significantly reduced.
  • It’s efficient in regard to the amount of data that needs to be copied to the backup target and stored there. This reduces the capacity needed and, for some vendors, the almost hard dependency on deduplication to make it feasible in regard to cost.
  • These capabilities are available to anyone (backup vendors, storage vendors, home-grown PowerShell scripts …) who wishes to leverage them, and they don’t prevent vendors from implementing synthetic full backups, merging backups as they age, etc. It’s capable enough to allow great backup solutions to be built on top of it.
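The CBT idea behind these improvements can be sketched in a few lines: the hypervisor keeps a per-block “dirty” flag, and each backup copies only the blocks flagged since the previous one. The toy Python below illustrates that principle only; it is not Microsoft’s actual tracking implementation, and all names in it (`TrackedDisk`, `BLOCK_SIZE`) are made up for the sketch.

```python
# Toy illustration of changed block tracking (CBT): only blocks whose
# "dirty" flag is set since the last backup are copied. Conceptual
# sketch only, not Microsoft's real mechanism.

BLOCK_SIZE = 4096  # bytes per tracked block (assumed for illustration)

class TrackedDisk:
    def __init__(self, num_blocks):
        self.blocks = [b"\x00" * BLOCK_SIZE for _ in range(num_blocks)]
        self.dirty = [True] * num_blocks  # everything is "new" at first

    def write(self, index, data):
        self.blocks[index] = data.ljust(BLOCK_SIZE, b"\x00")
        self.dirty[index] = True  # the tracker records the change

    def backup(self, target):
        """Copy only dirty blocks to the target, then reset tracking."""
        copied = 0
        for i, is_dirty in enumerate(self.dirty):
            if is_dirty:
                target[i] = self.blocks[i]
                self.dirty[i] = False
                copied += 1
        return copied

disk = TrackedDisk(num_blocks=1000)
target = {}

full = disk.backup(target)         # first backup copies all 1000 blocks
disk.write(7, b"changed")
disk.write(42, b"also changed")
incremental = disk.backup(target)  # second backup copies only 2 blocks
print(full, incremental)           # 1000 2
```

The win is exactly what the list above describes: the second backup touches two blocks instead of a thousand, so less data crosses the wire and lands on the backup target.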

Let’s dive in together and take a closer look.

Windows Server 2016 Hyper-V backup

Hyper-V backup challenges Windows Server 2016 needs to address
Posted by Didier Van Hoye on September 12, 2016

Introduction

Personally, I have been very successful at providing good backup designs for Hyper-V in small to large environments, with budgets ranging from “make do” to “well-funded”. How does one achieve this? Two factors. The first is knowing the strengths and limitations of the various Hyper-V versions when you design the backup solution. Besides the ever-better scalability, performance and capabilities of each new version of Hyper-V, the improvements in backup from 2012 to 2012 R2, for example, were a prime motivator to upgrade. The second factor of success is that I demand a mandate and control over the infrastructure stack. In many cases you are not that lucky and can’t change much in already existing environments; sometimes not even in new environments, when the gear and solutions have already been chosen and purchased and the design is deployed before you get involved.

Windows Server 2008 (R2) - 2012 Hyper-V Backup

Storage Replica: “Shared Nothing” Hyper-V Guest VM Cluster
Posted by Anton Kolomyeytsev on October 30, 2014

This post is about Microsoft Storage Replica, a new solution introduced by Microsoft in Windows Server 2016. Basically, it enables replication between two servers, between clusters or inside a cluster, and it is also capable of copying data between volumes on the same server. Storage Replica is often utilized for Disaster Recovery, allowing the user to replicate data to a remote site and thus recover from a complete physical failure at the main location. This post is dedicated to building a “shared nothing” cluster. It is an experimental part of a series and features a “Shared Nothing” Hyper-V guest VM cluster. As always, there are detailed instructions on how to create the subject setup and results for everyone to check.
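The core idea of this kind of replication can be sketched abstractly: every write is applied to the local volume and to the remote copy before it is acknowledged, so the remote site always holds an identical copy ready for recovery. The Python below is a deliberately simplified model of that synchronous pattern; the class and method names (`Volume`, `SyncReplicator`) are invented for illustration and have nothing to do with Storage Replica’s real interfaces.

```python
# Conceptual model of synchronous "shared nothing" replication:
# a write is acknowledged only after both copies have applied it.
# Toy sketch only, not the Storage Replica implementation.

class Volume:
    """A trivially simple block store keyed by offset."""
    def __init__(self):
        self.data = {}

    def apply(self, offset, payload):
        self.data[offset] = payload

class SyncReplicator:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, offset, payload):
        # Apply locally, then ship to the remote copy; only then
        # acknowledge. This keeps both sides identical at all times.
        self.primary.apply(offset, payload)
        self.secondary.apply(offset, payload)
        return "ack"

primary, secondary = Volume(), Volume()
repl = SyncReplicator(primary, secondary)
repl.write(0, b"critical data")

# After a disaster at the main site, the remote copy is identical
# and can be brought online for recovery.
assert secondary.data == primary.data
```

The trade-off modeled here is the usual one for synchronous replication: each write waits for the remote site, which is why distance and link latency matter so much in a real deployment.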

storage replication


Storage Replica: “Shared Nothing” Hyper-V HA VM Cluster
Posted by Anton Kolomyeytsev on October 29, 2014

This post is dedicated to the new solution from Microsoft – the Microsoft Storage Replica. It is a practical part of a series of posts about this technology and features a “Shared Nothing” Hyper-V HA VM Cluster in practice. Microsoft Storage Replica is designed to perform replication between various media: servers, clusters, volumes inside a server, etc. Its typical usage scenario is Disaster Recovery, which is essential for data protection in case anything happens to the main location. Critical data is replicated to a remote site, often located hundreds or thousands of miles away for better data safety. The experiment was performed by StarWind engineers, so the post contains detailed instructions and a comprehensive conclusion.

storage replication
