The latest updates in vSphere 6.5 and VSAN 6.5
Posted by Askar Kopbayev on
October 18, 2016
This day has come: vSphere 6.5 has just been announced. Like many of you, I had been waiting for the new vSphere to be presented at the VMworld event in the USA, but I guess VMware preferred to save vSphere 6.5 as a treat for those who were in doubt about attending VMworld Europe after all the VMworld US sessions were made available online to everyone; or perhaps VMware simply hadn't decided yet which features should be included in the GA release.
In this post, I will try to cover all the new features of vSphere 6.5 and VSAN 6.5, but if I have missed something, feel free to let me know by leaving a comment.
To be honest, there is so much to talk about that some of the new features deserve separate posts to be explained properly. Therefore, please don't expect a detailed review of every single feature. This is more of a 'What's new in vSphere 6.5 and VSAN 6.5' overview, but in future posts I will cover some of the most interesting improvements and enhancements in detail.
Don’t Fear but Respect Redirected IO with Shared VHDX
Posted by Didier Van Hoye on
August 25, 2016
When we got Shared VHDX in Windows Server 2012 R2, we were quite pleased, as it opened up the road to guest clustering (failover clustering in virtual machines) without needing to break through the virtualization layer with iSCSI or virtual Fibre Channel (vFC).
First of all, you need to be aware of the limits of using a shared VHDX in Windows Server 2012 R2.
- You cannot perform storage live migration
- You cannot resize the VHDX online
- You cannot do host-based backups (i.e. you need to do in-guest backups)
- You cannot use checkpoints
- You cannot use Hyper-V Replica
If you cannot live with these limits, that's a good indicator this is not for you. But if you can, you should also account for the potential redirected IO impact that can and will occur. This doesn't mean it won't work for you, but you need to know about it, design and build for it, and test it realistically against your real-life workloads.
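For context, attaching a shared VHDX to guest-cluster nodes in Windows Server 2012 R2 might look like the following sketch. The VM names and the CSV path are hypothetical; the key piece is the -SupportPersistentReservations switch, which is what marks the disk as shared in 2012 R2:

```powershell
# Create a fixed-size VHDX on a Cluster Shared Volume (hypothetical path)
New-VHD -Path 'C:\ClusterStorage\Volume1\Shared\data.vhdx' -Fixed -SizeBytes 100GB

# Attach it to each guest-cluster node with persistent reservations enabled
Add-VMHardDiskDrive -VMName 'GuestNode1' `
    -Path 'C:\ClusterStorage\Volume1\Shared\data.vhdx' `
    -SupportPersistentReservations
Add-VMHardDiskDrive -VMName 'GuestNode2' `
    -Path 'C:\ClusterStorage\Volume1\Shared\data.vhdx' `
    -SupportPersistentReservations
```

Because both nodes open the same VHDX, its IO may be redirected over the cluster network rather than served directly, which is exactly the effect the post tells you to test for.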
How to Configure Storage Replication using Windows Server 2016? – Part 2
Posted by Charbel Nemnom on
February 3, 2016
Warning: This article is written with information related to Windows Server 2016 Technical Preview 4.
In part one of this multi-part blog on How to Configure Storage Replication in Windows Server 2016, we introduced Storage Replica, a new feature in Windows Server 2016, and walked step by step through the implementation of server-to-server volume replication. In this follow-up post, we are going to cover the implementation of volume replication with a stretch cluster. This type of cluster uses asymmetric storage: two sites, each with its own set of shared storage, with volume replication ensuring that data is available to all nodes in the cluster.
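As a rough sketch, the server-to-server flavor covered in part one boils down to validating the topology and then creating a partnership; the server names, replication group names, and drive letters below are hypothetical. In the stretch-cluster variant the same replication is configured on cluster disks, typically through Failover Cluster Manager:

```powershell
# Validate the replication topology first and review the generated report
Test-SRTopology -SourceComputerName 'SR-SRV01' -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
    -DestinationComputerName 'SR-SRV02' -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:' `
    -DurationInMinutes 10 -ResultPath 'C:\Temp'

# Create the replication partnership between the two volumes
New-SRPartnership -SourceComputerName 'SR-SRV01' -SourceRGName 'RG01' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
    -DestinationComputerName 'SR-SRV02' -DestinationRGName 'RG02' `
    -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:'
```

Note that each replicated data volume needs its own log volume, which is why both a data and a log drive letter appear on each side.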
iSCSI: LACP vs. MPIO
Posted by Anton Kolomyeytsev on
March 31, 2015
Here is a comparison of two technologies that share a similar task but use different methods to accomplish it: Link Aggregation Control Protocol (LACP) and Multipath I/O (MPIO). Both aim to provide higher throughput when a single connection can't handle the load. To achieve that, LACP bundles several physical ports into a single logical channel. MPIO, on the other hand, utilizes more than one physical path, even if the application does not support more than one connection. Both technologies seem equally effective at first glance, but closer study shows that one of them achieves the goal more effectively. The post is practical, so expect detailed research with screenshots and a complete analysis of the two technologies in a test case.
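For reference, the MPIO side of such a test on Windows can be sketched with the built-in MPIO module; this is a minimal example assuming an iSCSI-attached disk and a round-robin policy (the post's own test setup may differ):

```powershell
# Add the MPIO feature (a reboot is typically required)
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Let MPIO automatically claim disks presented over iSCSI
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Use round-robin across all active paths as the default load-balancing policy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```

With round-robin in place, each iSCSI session contributes a path, so throughput can scale across NICs even for a single LUN, which is the scenario where the LACP-vs-MPIO difference shows up most clearly.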
Storage Replica: “Shared Nothing” Hyper-V Guest VM Cluster
Posted by Anton Kolomyeytsev on
October 30, 2014
This post is about Microsoft Storage Replica, a new solution introduced in Windows Server 2016. Basically, it enables replication between two servers, between two clusters, or within a cluster, and it can also copy data between volumes on the same server. Storage Replica is often used for disaster recovery, allowing the user to replicate data to a remote site and thus recover from a complete physical failure at the main location. The post is dedicated to building a "shared nothing" cluster. It is an experimental part of a series and features a "shared nothing" Hyper-V guest VM cluster. As always, there are detailed instructions on how to create the setup, and results for everyone to check.