Cluster Rolling Upgrade from Windows Server 2012 R2 to Windows Server 2016
Posted by Boris Yurchenko on January 11, 2018

During its lifetime, any system reaches a point where it needs to be upgraded, either in hardware or in software. Today, I will talk about one such change: upgrading Windows Failover Cluster nodes from Windows Server 2012 R2 to Windows Server 2016 with no production interruption. Thanks to Microsoft, we have the Cluster Rolling Upgrade procedure at our fingertips, and I am going to walk through it and confirm that it works with virtualized disks used as Cluster Shared Volumes in a Windows Failover Cluster. The procedure rebuilds the nodes one by one with a clean OS deployment, while production keeps running on the remaining cluster node.

To begin with, I have a 2-node Windows Failover Cluster with Windows Server 2012 R2 installed on both nodes. The cluster has two CSVs along with the quorum. The whole system is configured in a hyperconverged scenario.
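To give an idea of the PowerShell side of the procedure, here is a minimal sketch; the cluster and node names are placeholders rather than my actual lab names:

# Join a freshly reinstalled Windows Server 2016 node back into the existing cluster
Add-ClusterNode -Cluster MyCluster -Name Node2

# While nodes of both OS versions are members, the cluster runs in mixed mode
Get-Cluster -Name MyCluster | Select-Object Name, ClusterFunctionalLevel

# Only after every node runs Windows Server 2016, commit the upgrade (this step is irreversible)
Update-ClusterFunctionalLevel -Cluster MyCluster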

Joining a node to a Windows Failover Cluster


Combining Virtual SAN (vSAN) with Microsoft Storage Spaces for greater Performance and better Resiliency
Posted by Vitalii Feshchenko on December 13, 2017

Introduction

Previously, we went through the Storage Spaces configuration journey.

The latest step was the creation of the storage pool and the virtual disk.

Today I would like to proceed from that point and create Highly Available (HA) devices with StarWind Virtual SAN, using Storage Spaces as the underlying storage. The main goal of this post is to run performance tests of StarWind Highly Available (HA) devices located on Storage Spaces created in different ways (Simple and Mirror). The StarWind HA devices will be mirrored between two hosts via a 40Gbps synchronization channel.
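For reference, a rough PowerShell sketch of how the two underlying Storage Spaces variants could be created; the pool and virtual disk names are only examples:

# Pool all disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# A Simple (striped) space for maximum raw performance...
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName SimpleVD -ResiliencySettingName Simple -UseMaximumSize

# ...or a Mirror space for extra local resiliency underneath the StarWind HA device
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName MirrorVD -ResiliencySettingName Mirror -UseMaximumSize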

StarWind HA with Storage Spaces environment diagram

 


Using a VEEAM off-host backup proxy server for backing up Windows Server 2016 Hyper-V Hosts
Posted by Didier Van Hoye on December 5, 2017

Introduction

Many years ago, I wrote a white paper on how to configure a Veeam off-host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster that uses a hardware VSS provider with Veeam Backup & Replication 7.0. It has aged well, and you can still use it as a guide to set everything up. In this article, however, I revisit the use of a hardware VSS provider, focusing specifically on the changes in Windows Server 2016 and on its use with Veeam Backup & Replication v9.5 or later. The information here is valid for any good hardware VSS provider, like the one StarWind Virtual SAN provides (see "Do I need StarWind Hardware VSS provider?").
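As a quick sanity check before digging in, you can list which VSS providers and writers the Hyper-V host actually has registered; the hardware provider you installed should show up here:

# Run on the Hyper-V host from an elevated prompt
vssadmin list providers   # the hardware VSS provider should be listed here
vssadmin list writers     # the Hyper-V VSS writer should report a stable state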

VSS list of events


Take a look at Storage QoS Policies in Windows Server 2016
Posted by Didier Van Hoye on November 21, 2017

Introduction

In Windows Server 2016, Microsoft introduced Storage Quality of Service (QoS) policies. Previously, in Windows Server 2012 R2, we could set minimum and maximum IOPS individually per virtual hard disk, but this was limited even if you could automate it with PowerShell. The maximum was enforced, but the minimum was not: it only logged a warning when the minimum could not be delivered, and doing this at scale took automation that went beyond what was practical for many administrators. While it was helpful and I used it in certain scenarios, it needed to mature to deliver real value and to offer storage QoS in environments built on cost-effective, highly available storage that often doesn't include native QoS capabilities for use with Hyper-V.
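As a hedged sketch of what the Windows Server 2016 policies look like in PowerShell (the policy name, VM name and IOPS values below are made up for illustration):

# On the storage cluster: create a policy with a real minimum and maximum
New-StorageQosPolicy -Name Gold -PolicyType Dedicated -MinimumIops 500 -MaximumIops 5000

# On the Hyper-V host: assign the policy to a VM's virtual hard disk(s)
$policy = Get-StorageQosPolicy -Name Gold
Get-VM -Name DemoVM01 | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Check the status of the flows, as in the screenshot below
Get-StorageQosFlow | Sort-Object InitiatorName | Format-Table -AutoSize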

Status of the flows via PowerShell


Introducing Microsoft ‘Project Honolulu’
Posted by Nicolas Prigent on November 7, 2017

Project Honolulu image

Microsoft continues to invest in and expand its PowerShell scripting environment, but sometimes a graphical interface is necessary to manage systems. This is why Microsoft is also developing a new management tool called “Project Honolulu”. Honolulu is the modern evolution of the traditional MMC, first introduced in 2000. Now it’s time to update our management tools!

So, Microsoft introduced the Technical Preview of Project Honolulu at Microsoft Ignite: a new way of managing your Windows Servers from a browser-based, HTML5 graphical management tool. Microsoft said: “Our vision is to deliver a secure platform. […] For us, modernizing the platform means giving users greater flexibility in how and where they deploy and access the tools. […] Some Windows Server capabilities, which were previously manageable only via PowerShell, now also have an easy-to-use graphical experience”.

In this article, I will describe how to download and install Honolulu.
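As a preview of what that installation boils down to, the MSI can be installed silently roughly as shown below; the file name, port, and certificate option are assumptions based on how the preview installer is typically run unattended, so adjust them to the build you actually download:

# Unattended install of the Project Honolulu technical preview (file name is an example)
msiexec /i HonoluluTechnicalPreview.msi /qn /L*v honolulu-install.log SME_PORT=6516 SSL_CERTIFICATE_OPTION=generate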


Deploying SQL Server 2016 Basic Availability Groups Without Active Directory. Part 1: Building the Platform
Posted by Edwin M Sarmiento on October 31, 2017

Introduction

When Availability Groups were introduced in SQL Server 2012, they were only available in Enterprise Edition. This made it challenging to move from Database Mirroring to Availability Groups, especially if you were running Standard Edition. To upgrade and migrate away from Database Mirroring in Standard Edition, you had to either move to a more expensive Enterprise Edition license and implement Availability Groups, or stick with Database Mirroring and hope that everything keeps working despite the feature being deprecated.

SQL Server 2016 introduced Basic Availability Groups in Standard Edition, allowing customers to run a limited form of Availability Groups. Customers now have a viable replacement for Database Mirroring in Standard Edition. However, unlike Database Mirroring, Availability Groups require a Windows Server Failover Cluster (WSFC). SQL Server database administrators now need to be highly skilled in designing, implementing and managing a WSFC outside of SQL Server, because the availability of the SQL Server databases relies heavily on the WSFC.
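Since there is no Active Directory in this scenario, the WSFC underneath the Basic Availability Group is a Windows Server 2016 workgroup cluster. A minimal sketch, with placeholder node names and IP address:

# Each node needs a primary DNS suffix and a matching local administrative account configured beforehand
# Create a workgroup (non-domain) cluster with a DNS administrative access point
New-Cluster -Name SQLCLUSTER -Node SQLNODE1, SQLNODE2 -AdministrativeAccessPoint DNS -StaticAddress 192.168.0.100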

SQL Server 2016 logo


How to configure a Multi-Resilient Volume on Windows Server 2016 using Storage Spaces
Posted by Vitalii Feshchenko on October 24, 2017

Introduction

Plenty of articles have been released about Storage Spaces and everything around this topic. However, I would like to consolidate the relevant information and lead you through the journey of configuring Storage Spaces on a standalone host.

The main goal of the article is to show a Multi-Resilient Volume configuration process.

How it works

In order to use tiered Storage Spaces, we need both faster (NVMe, SSD) and slower (HDD) devices.

So, we have a set of NVMe devices along with SAS or SATA HDDs, and we create a performance tier and a capacity tier from them, respectively.

The NVMe tier is used for caching. When hot blocks are written to the storage array, they are written to the caching tier (SSDs or NVMe) first:
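A rough PowerShell sketch of how such a tiered volume can be built on a standalone host; the pool name, tier sizes and resiliency settings are examples, not the exact values used later in the article:

# Define a mirror-resilient performance tier and a parity-resilient capacity tier
New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName Performance -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName Capacity -MediaType HDD -ResiliencySettingName Parity

# Create the multi-resilient ReFS volume spanning both tiers
New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName MRV01 -FileSystem ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 100GB, 900GB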

Data in Performance Tier


SMB Direct in a Windows Server 2016 Virtual Machine Experiment
Posted by Didier Van Hoye on October 12, 2017

Introduction

Ever since Windows Server 2012, we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as Storage Spaces Direct (S2D) and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with offloading work from the CPU to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.

Traditionally, in order for SMB Direct to work, the SMB stack needs direct access to the RDMA NICs. This means that right up to Windows Server 2012 R2, we had SMB Direct running on physical NICs in the host, i.e. the parent partition/management OS. You could not have RDMA exposed on a vNIC or even on a native host NIC team (LBFO). SMB Direct was also not compatible with SR-IOV. That was, and still is for that OS version, common knowledge and a design consideration. With Windows Server 2016, things changed. You can now have RDMA exposed on a vSwitch and on management OS vNICs. Even better, the new Switch Embedded Teaming (SET) allows RDMA to be exposed in the same way on top of a teamed vSwitch. SET is an important technology here, as RDMA is still not exposed on a native Windows team (LBFO).
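To make that concrete, here is a minimal sketch of a SET vSwitch with RDMA-enabled management OS vNICs; the NIC and vNIC names are examples:

# Create a SET vSwitch over two RDMA-capable physical NICs
New-VMSwitch -Name SETswitch -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add management OS vNICs for SMB traffic and expose RDMA on them
Add-VMNetworkAdapter -SwitchName SETswitch -ManagementOS -Name SMB1
Add-VMNetworkAdapter -SwitchName SETswitch -ManagementOS -Name SMB2
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"

# Verify the vNICs show up as RDMA capable to the SMB client
Get-SmbClientNetworkInterface | Where-Object RdmaCapable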

Mellanox InfiniBand Router


The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming
Posted by Didier Van Hoye on September 27, 2017

Introduction

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct), you’ll notice that I set -IeeePriorityTag to “On” on the vNICs that use DCB for QoS. This requires some explanation.

When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This always happens no matter what you set the -IeeePriorityTag option to. On or Off, it doesn’t make a difference; it works out of the box.
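For context, this is the kind of configuration the post refers to; priority 3 and the vNIC name are only examples of a common DCB setup for SMB Direct:

# Classify SMB Direct traffic into priority 3 and enable PFC for it
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3

# The setting this post is about, applied per management OS vNIC
Set-VMNetworkAdapter -ManagementOS -Name SMB1 -IeeePriorityTag On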

RDMA vNICs mapped to their respective RDMA pNICs


Hyper-V VMs on an NFS share on Windows Server 2016 – is that real?
Posted by Sergey Sanduliak on September 26, 2017

A few years ago, we tried to place a VM on an NFS share. We used Windows Server 2012, since Hyper-V is Microsoft's native hypervisor.

Now we have decided to reproduce the experiment on Windows Server 2016. Just because of boundless curiosity 😊

So, we have two nodes: S3n11 serves as the NFS file server and S3n12 takes the Hyper-V server role.

We will do exactly the same thing as we did before, but this time on Windows Server 2016 on both VMs.

Let’s start!
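For reference, the NFS side of such a setup can be stood up with a couple of cmdlets; the share name and path below are placeholders:

# On S3n11: install the Server for NFS role and publish a share
Install-WindowsFeature FS-NFS-Service -IncludeManagementTools
New-NfsShare -Name "VMStore" -Path "D:\VMStore" -Permission readwrite -AllowRootAccess $true

# On S3n12, the Hyper-V host, the share would then be reached as \\S3n11\VMStore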

Hyper-V VMs on an NFS share on Windows Server
