Windows Server 2016 Core configuration. Part 3: Failover Clustering
Posted by Alex Khorolets on April 3, 2018


Looking back at the previous articles in our “How-to-Core basics” series, we managed to install the Core version of Windows Server 2016, set up the required networks, and create the storage for the virtual machines.

In the final part of the trilogy, I’ll cover the steps left to prepare the environment in order to make your production highly available and fault-tolerant.

In short, last time we installed the Core version of Windows Server on a single server and added the storage as an iSCSI target. Highly available, fault-tolerant storage requires a second server to create the failover cluster. The required configuration differs little from the steps we performed previously.
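As a rough sketch of where this is heading, the Failover Clustering feature can be installed and a two-node cluster formed with a few PowerShell commands; the node names, cluster name, and IP address below are placeholders rather than the values used in this series.

```powershell
# Install the Failover Clustering feature on both nodes (names are placeholders)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName SRV-CORE-01
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName SRV-CORE-02

# Validate the intended configuration, then create the cluster with a static address
Test-Cluster -Node SRV-CORE-01, SRV-CORE-02
New-Cluster -Name SW-CLUSTER -Node SRV-CORE-01, SRV-CORE-02 -StaticAddress 192.168.0.100
```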


How to enable Active Directory Recycle Bin in Windows Server 2016
Posted by Vladan Seget on March 1, 2018

Before we dive into how to enable Active Directory Recycle Bin in Windows Server 2016, we will first explain what it is and when Microsoft introduced this feature.

Active Directory Recycle Bin simply allows you to restore deleted objects from Active Directory. It can be a user account, a computer account, or a whole Organizational Unit (OU). Who hasn't accidentally deleted an AD object at some point in their career?
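For reference, here is a minimal PowerShell sketch of enabling the feature and restoring a deleted object; the forest name and the account name are placeholders, and the full walkthrough follows in the post.

```powershell
# Enable the AD Recycle Bin for the forest (this cannot be disabled afterwards);
# 'corp.contoso.com' is a placeholder forest name
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' `
    -Scope ForestOrConfigurationSet -Target 'corp.contoso.com'

# Find a deleted object by its sAMAccountName and restore it ('jsmith' is a placeholder)
Get-ADObject -Filter 'samAccountName -eq "jsmith"' -IncludeDeletedObjects | Restore-ADObject
```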

 


Cluster Rolling Upgrade from Windows Server 2012 R2 to Windows Server 2016
Posted by Boris Yurchenko on January 11, 2018

During its lifetime, any system reaches a point when it needs to be upgraded, either in terms of hardware or software. Today, I will talk about such changes, in particular about upgrading Windows Failover Cluster nodes from Windows Server 2012 R2 to Windows Server 2016 with no production interruption. Thanks to Microsoft, we have a Cluster Rolling Upgrade procedure at our fingertips, and I am going to walk through it and confirm that it works with virtualized disks used as Cluster Shared Volumes in a Windows Failover Cluster. The procedure assumes rebuilding the nodes with a clean OS deployment one by one, while production keeps running on the other cluster node.

To begin with, I have a 2-node Windows Failover Cluster with Windows Server 2012 R2 installed on the nodes. The cluster has two CSVs along with the quorum. The whole system is configured in a hyperconverged scenario.
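In outline, the rolling upgrade can be driven with the failover clustering cmdlets roughly as follows; the node name is a placeholder, and the detailed, verified steps are in the post itself.

```powershell
# Drain and evict one node, rebuild it with Windows Server 2016, then re-join it
Suspend-ClusterNode -Name NODE1 -Drain -Wait
Remove-ClusterNode -Name NODE1 -Force

# ... clean-install Windows Server 2016 on NODE1, then add it back ...
Add-ClusterNode -Name NODE1

# While the nodes run mixed OS versions, the cluster stays at functional level 8 (2012 R2)
Get-Cluster | Select-Object Name, ClusterFunctionalLevel

# Once every node runs Windows Server 2016, commit the upgrade (this step is irreversible)
Update-ClusterFunctionalLevel
```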

Joining a node to a Windows Failover Cluster


Combining Virtual SAN (vSAN) with Microsoft Storage Spaces for greater Performance and better Resiliency
Posted by Vitalii Feshchenko on December 13, 2017

Introduction

Previously, we went through the Storage Spaces configuration journey.

The last step was the creation of the storage pool and the virtual disk.

Today, I would like to proceed from that point and create Highly Available (HA) devices with StarWind Virtual SAN, using Storage Spaces as the underlying storage. The main goal of this post is to run performance tests of StarWind Highly Available (HA) devices located on Storage Spaces created in different ways (Simple and Mirror). The StarWind HA devices will be mirrored between two hosts over a 40 Gbps synchronization channel.
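Since the tests compare devices backed by Simple and Mirror spaces, here is a hedged sketch of how the two kinds of virtual disks could be created from an existing pool; the pool name, disk names, and sizes are examples, not the values used in the test setup.

```powershell
# Assumes a storage pool named 'Pool1' already exists (see the previous post)

# Simple (striped, no redundancy) virtual disk
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VD-Simple `
    -ResiliencySettingName Simple -Size 500GB -ProvisioningType Fixed

# Two-way mirror virtual disk
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VD-Mirror `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -Size 500GB -ProvisioningType Fixed
```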

StarWind HA with Storage Spaces environment diagram

 


Using a VEEAM off-host backup proxy server for backing up Windows Server 2016 Hyper-V Hosts
Posted by Didier Van Hoye on December 5, 2017

Many years ago, I wrote a white paper on how to configure a Veeam off-host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster that uses a hardware VSS provider with Veeam Backup & Replication 7.0. It has aged well, and you can still use it as a guide to set it all up. But in this article, I revisit the use of a hardware VSS provider, focusing specifically on some changes in Windows Server 2016 and on its use by Veeam Backup & Replication v9.5 or later. The information here is valid for any good hardware VSS provider, like the one StarWind Virtual SAN provides (see Do I need StarWind Hardware VSS provider?).
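As a quick sanity check that is not specific to the white paper, you can verify which VSS providers (software and hardware) are registered on the Hyper-V host before pointing Veeam at the hardware provider:

```powershell
# List all registered VSS providers on the host
vssadmin list providers

# The same information via CIM/WMI
Get-CimInstance -ClassName Win32_ShadowProvider | Select-Object Name, Type, Version
```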

VSS list of events


Take a look at Storage QoS Policies in Windows Server 2016
Posted by Didier Van Hoye on November 21, 2017

Introduction

In Windows Server 2016, Microsoft introduced storage Quality of Service (QoS) policies. Previously, in Windows Server 2012 R2, we could set minimum and maximum IOPS individually per virtual hard disk, but this was limited even if you could automate it with PowerShell. The maximum was enforced, but the minimum was not: it only logged a warning when the minimum could not be delivered, and automating it at scale went beyond what was practical for many administrators. While it was helpful and I used it in certain scenarios, it needed to mature to deliver real value and offer storage QoS in environments built on cost-effective, highly available storage that often doesn't include native QoS capabilities for use with Hyper-V.
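To give an idea of what the new policies look like in practice, here is a hedged sketch of creating a Dedicated policy and attaching it to a VM's virtual disk on a cluster where Storage QoS is available (CSV or Scale-Out File Server); the policy name, IOPS values, and VM name are examples only.

```powershell
# Create a Dedicated storage QoS policy on the cluster
New-StorageQosPolicy -Name GoldTier -PolicyType Dedicated -MinimumIops 500 -MaximumIops 5000

# Assign the policy to the virtual hard disks of a VM
$policy = Get-StorageQosPolicy -Name GoldTier
Get-VM -Name SQLVM01 | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Check the status of the flows against the policy
Get-StorageQosFlow | Format-Table InitiatorName, Status, MinimumIops, MaximumIops
```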

Status of the flow via PowerShell


Introducing Microsoft ‘Project Honolulu’
Posted by Nicolas Prigent on November 7, 2017

Project Honolulu image

Microsoft continues to invest in and expand its PowerShell scripting environment, but sometimes a graphical interface is necessary to manage systems. This is why Microsoft is also developing a new management tool called “Project Honolulu”. Honolulu is the modern evolution of the traditional MMC, first introduced in 2000. Now, it’s time to update our management tools!

So, Microsoft introduced the Technical Preview of Project Honolulu at Microsoft Ignite: a new way of managing your Windows Servers through a browser-based, HTML5 graphical management tool. Microsoft said “Our vision is to deliver a secure platform. […] For us, modernizing the platform means giving users greater flexibility in how and where they deploy and access the tools. […] Some Windows Server capabilities, which were previously manageable only via PowerShell, now also have an easy-to-use graphical experience”.

In this article, I will describe how to download and install Honolulu.


Deploying SQL Server 2016 Basic Availability Groups Without Active Directory. Part 1: Building the Platform
Posted by Edwin M Sarmiento on October 31, 2017

Introduction

When Availability Groups were introduced in SQL Server 2012, they were only available in Enterprise Edition. This made it challenging to move from Database Mirroring to Availability Groups, especially if you were running Standard Edition. To upgrade and migrate from Database Mirroring in Standard Edition, you either had to move to a more expensive Enterprise Edition license and implement Availability Groups, or stick with Database Mirroring and hope that everything kept working despite the feature being deprecated.

SQL Server 2016 introduced Basic Availability Groups in Standard Edition, allowing customers to run some form of limited Availability Groups. Customers now have a viable replacement for Database Mirroring in Standard Edition. However, unlike Database Mirroring, Availability Groups require a Windows Server Failover Cluster (WSFC). SQL Server database administrators now need to be highly skilled in designing, implementing and managing a WSFC outside of SQL Server, because the availability of the SQL Server databases relies heavily on the WSFC.
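To hint at what building the platform without Active Directory involves, here is a hedged sketch of creating a workgroup (AD-detached) cluster with a DNS administrative access point; the cluster name, node names, and IP address are placeholders, and the article covers the actual procedure and prerequisites in detail.

```powershell
# Prerequisite (per node): a primary DNS suffix must be configured and the nodes
# must be able to resolve each other in DNS, since there is no Active Directory.

# Create an AD-detached (workgroup) cluster with a DNS administrative access point
New-Cluster -Name SQLAGCLUSTER -Node SQLNODE1, SQLNODE2 `
    -AdministrativeAccessPoint DNS -StaticAddress 192.168.0.110

# Verify the cluster came up
Get-Cluster | Select-Object Name, Domain
```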

SQL Server 2016 logo


How to configure a Multi-Resilient Volume on Windows Server 2016 using Storage Spaces
Posted by Vitalii Feshchenko on October 24, 2017

Introduction

Plenty of articles have been released about Storage Spaces and everything around this topic. However, I would like to consolidate the relevant information and lead you through the journey of configuring Storage Spaces on a standalone host.

The main goal of the article is to show a Multi-Resilient Volume configuration process.

How it works

In order to use Storage Spaces, we need to have faster (NVMe, SSD) and slower (HDD) devices.

So, we have a set of NVMe devices along with SAS or SATA HDDs, and from them we create a performance tier and a capacity tier, respectively.

The NVMe tier is used for caching. When hot blocks are written to the storage array, they land on the caching tier first (SSDs or NVMe):

Data in Performance Tier
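To make the tiering idea concrete, here is a hedged PowerShell sketch of building a pool with a mirrored performance tier and a parity capacity tier, and then creating a multi-resilient ReFS volume across both; the friendly names and sizes are arbitrary examples, and the article walks through the exact steps.

```powershell
# Pool all poolable fast (NVMe/SSD) and slow (HDD) devices
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName MRVPool -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Mirrored performance tier on the fast media, parity capacity tier on the HDDs
New-StorageTier -StoragePoolFriendlyName MRVPool -FriendlyName Performance -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName MRVPool -FriendlyName Capacity -MediaType HDD -ResiliencySettingName Parity

# Multi-resilient ReFS volume spanning both tiers (sizes are examples)
New-Volume -StoragePoolFriendlyName MRVPool -FriendlyName MRV01 -FileSystem ReFS `
    -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 100GB, 900GB
```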


SMB Direct in a Windows Server 2016 Virtual Machine Experiment
Posted by Didier Van Hoye on October 12, 2017

Introduction

Ever since Windows Server 2012, we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.

Traditionally, in order for SMB Direct to work, the SMB stack needs direct access to the RDMA NICs. This means that, right up to Windows Server 2012 R2, we had SMB Direct running on physical NICs on the host or the parent partition/management OS. You could not have RDMA exposed on a vNIC or even on a host-native NIC team (LBFO). SMB Direct was also not compatible with SR-IOV. That was, and still is for that OS version, common knowledge and a design consideration. With Windows Server 2016, things changed. You can now have RDMA exposed on a vSwitch and on management OS vNICs. Even better, the new Switch Embedded Teaming (SET) allows RDMA to be exposed in the same way on top of a SET-based vSwitch. SET is an important technology here, as RDMA is still not exposed on a native Windows team (LBFO).
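As an illustration of the Windows Server 2016 behavior described above, here is a minimal sketch of creating a SET-based vSwitch and exposing RDMA on a management OS vNIC; the physical NIC names and the vNIC name are placeholders.

```powershell
# Create a vSwitch with Switch Embedded Teaming (SET) over two RDMA-capable NICs
New-VMSwitch -Name SETswitch -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true

# Add a management-OS vNIC and enable RDMA on it
Add-VMNetworkAdapter -SwitchName SETswitch -Name SMB1 -ManagementOS
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"

# Verify that SMB sees the interface as RDMA capable
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
```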

Mellanox InfiniBand Router
