Using a VEEAM off-host backup proxy server for backing up Windows Server 2016 Hyper-V Hosts
Posted by Didier Van Hoye on December 5, 2017

Introduction

Many years ago, I wrote a white paper on how to configure a Veeam off-host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster that uses a hardware VSS provider with Veeam Backup & Replication 7.0. It has aged well and you can still use it as a guide to set it all up. But in this article, I revisit the use of a hardware VSS provider, focusing specifically on some changes in Windows Server 2016 and its use by Veeam Backup & Replication v9.5 or later. The information here is valid for any good hardware VSS provider, like the one StarWind Virtual SAN provides (see Do I need StarWind Hardware VSS provider?).
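Before configuring the off-host proxy, it helps to verify that the hardware VSS provider is actually registered on the Hyper-V host and on the proxy. A minimal sketch of that sanity check; provider names differ per storage vendor:

```powershell
# List every VSS provider registered on this host or off-host proxy.
# The storage vendor's hardware provider should appear with type "Hardware".
vssadmin list providers

# The same information via CIM, which is easier to script across hosts.
Get-CimInstance -ClassName Win32_ShadowProvider |
    Select-Object Name, Type, Version
```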

[Image: VSS list of events]


Take a look at Storage QoS Policies in Windows Server 2016
Posted by Didier Van Hoye on November 21, 2017

Introduction

In Windows Server 2016 Microsoft introduced storage Quality of Service (QoS) policies. Previously, in Windows Server 2012 R2, we could set minimum and maximum IOPS per individual virtual hard disk, but this was limited, even if you could automate it with PowerShell. The maximum was enforced but the minimum was not; it only logged a warning if it could not be delivered, and it took automation that went beyond what was practical for many administrators when it needed to be done at scale. While it was helpful and I used it in certain scenarios, it needed to mature to deliver real value and to offer storage QoS in environments built on cost-effective, highly available storage, which often doesn’t include native QoS capabilities for use with Hyper-V.
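To give an idea of what the policy-based model looks like, here is a minimal sketch assuming a Windows Server 2016 storage cluster (S2D or a Scale-Out File Server); the policy name, IOPS values and VM name are illustrative:

```powershell
# Create an aggregated policy: all flows assigned to it share these limits.
$policy = New-StorageQosPolicy -Name "Silver" -PolicyType Aggregated `
    -MinimumIops 300 -MaximumIops 1000

# Assign the policy to every virtual hard disk of a VM.
Get-VM -Name "DemoVM" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Check the status of the flows, e.g. whether the minimum can be delivered.
Get-StorageQosFlow -InitiatorName "DemoVM" |
    Format-Table InitiatorName, Status, StorageNodeIOPs
```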

[Image: status of the flow via PowerShell]


SMB Direct in a Windows Server 2016 Virtual Machine Experiment
Posted by Didier Van Hoye on October 12, 2017

Introduction

Ever since Windows Server 2012 we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.

Traditionally, in order for SMB Direct to work, the SMB stack needs direct access to the RDMA NICs. This means that right up to Windows Server 2012 R2 we had SMB Direct running on physical NICs on the host or the parent partition/management OS. You could not have RDMA exposed on a vNIC or even on a host native NIC team (LBFO). SMB Direct was also not compatible with SR-IOV. That was, and still is for that OS version, common knowledge and a design consideration. With Windows Server 2016, things changed. You can now have RDMA exposed on a vSwitch and on management OS vNICs. Even better, the new Switch Embedded Teaming (SET) allows RDMA to be exposed in the same way on top of a vSwitch. SET is an important technology in this, as RDMA is still not exposed on a native Windows team (LBFO).
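To make that concrete, here is a minimal sketch of a converged SET setup with RDMA exposed on management OS vNICs; the pNIC and vNIC names are illustrative:

```powershell
# Create a Switch Embedded Teaming (SET) vSwitch over two RDMA-capable pNICs.
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add two management OS vNICs dedicated to SMB Direct traffic.
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETSwitch"

# Expose RDMA on those vNICs (host vNICs show up as "vEthernet (<name>)").
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"

# Verify RDMA is enabled on both the pNICs and the vNICs.
Get-NetAdapterRdma | Format-Table Name, Enabled
```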

[Image: Mellanox InfiniBand Router]


The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming
Posted by Didier Van Hoye on September 27, 2017

Introduction

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct), you’ll notice that I set the -IeeePriorityTag option to “On” on the vNICs that use DCB for QoS. This requires some explanation.

When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This always happens no matter what you set the -IeeePriorityTag option to. On or Off, it doesn’t make a difference. It works out of the box.
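For reference, this is how the option is toggled on a management OS vNIC; the vNIC name is illustrative:

```powershell
# Allow 802.1p priority tags set in the management OS to be passed through
# the vSwitch for this vNIC (SMB Direct traffic is tagged regardless).
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On

# Check the current setting.
Get-VMNetworkAdapter -ManagementOS -Name "SMB1" |
    Select-Object Name, IeeePriorityTag
```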

[Image: mapped RDMA vNIC to their respective RDMA pNIC]


Forcing the affinity of a virtual NIC to a physical NIC with a SET vSwitch via Set-VMNetworkAdapterTeamMapping
Posted by Didier Van Hoye on September 20, 2017

Introduction

Windows Server 2016 Hyper-V brought us Switch Embedded Teaming (SET). That’s the way forward when it comes to converged networking and Software-Defined Networking with the network controller and network virtualization. It also allows for the use of RDMA on a management OS virtual NIC (vNIC).

One of the capabilities within SET is affinitizing a vNIC to a particular team member, that is, a physical NIC (pNIC). This isn’t a hard requirement for SET to work properly, but it helps in certain scenarios. By a vNIC we mean either a management OS vNIC or a virtual machine vNIC; affinitizing can be done for both. The main use case and focus, here and in real life, is the management OS vNICs we use for SMB Direct traffic.
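A minimal sketch of the affinitization itself; the vNIC and pNIC names are illustrative:

```powershell
# Pin each SMB management OS vNIC to a specific physical member of the SET
# team, so its traffic predictably uses that pNIC for as long as it is up.
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" `
    -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" `
    -PhysicalNetAdapterName "NIC2"

# Show the resulting vNIC-to-pNIC mappings.
Get-VMNetworkAdapterTeamMapping -ManagementOS
```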

[Image: complete Switch Embedded Teaming configuration]


Why do we always see Responder CQE Errors with RoCE RDMA?
Posted by Didier Van Hoye on June 2, 2017


Anyone who has configured and used SMB Direct with RoCE RDMA Mellanox cards appreciates the excellent diagnostic counters Mellanox provides for use with Windows Performance Monitor. They are instrumental when it comes to finding issues and verifying everything is working correctly.

Many have complained about the complexity of DCB configuration, but in all earnest, any large network under congestion that needs specialized configurations has challenges due to scale. This is no different for DCB. You need the will to tackle the job at hand and do it right. Doing anything at scale reliably and consistently means automating it. Lossless Ethernet, mandatory or not, requires DCB to shine. There is little other choice today until networking technology & newer hardware solutions take an evolutionary step forward. I hope to address this in a future article. But this is not what we are going to discuss here. We’ve moved beyond that challenge. We’ll talk about one of the issues that confuses a lot of people.
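For what it’s worth, those diagnostic counters are easy to pull into a script. A minimal sketch; the counter set name depends on the Mellanox driver generation (WinOF vs. WinOF-2), so the one below is only an assumption to be replaced by whatever the first command returns:

```powershell
# Discover which Mellanox diagnostic counter sets the installed driver exposes.
Get-Counter -ListSet "*Mellanox*" | Select-Object CounterSetName

# Sample the responder CQE error counter from such a set (illustrative name).
Get-Counter -Counter "\Mellanox WinOF-2 Diagnostics(*)\Responder CQE Errors" |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table InstanceName, CookedValue
```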

[Image: Responder CQE errors reported after virtual machine migration from a Hyper-V cluster]


Upgrade your CA to KSP & SHA256. Part III: Move from SHA1 to SHA256
Posted by Didier Van Hoye on February 14, 2017

We’re not done yet. In Part II we moved from the older CSP to a KSP, but now we want to start issuing certs with a SHA256 hash. That’s what we’ll do here in Part III.

Move from SHA1 to SHA256

The final step is that we move from SHA1 to SHA256 and tell the CA to work with the KSP. This is a tedious job that involves creating registry files in order to change the existing registry keys we already backed up before.
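The values in question live under the CA’s CSP registry key (HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\<CA name>\CSP). As an alternative to hand-crafted .reg files, certutil can read and set the same values; a minimal sketch, to be followed by a restart of the CA service:

```powershell
# Inspect the hash algorithm the CA is currently configured to use.
certutil -getreg ca\csp\CNGHashAlgorithm

# Switch the CA to SHA256 (this assumes the CA key is already on a KSP).
certutil -setreg ca\csp\CNGHashAlgorithm SHA256

# Restart Active Directory Certificate Services so the change takes effect.
Restart-Service certsvc
```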


[Image: Registry Editor]


Upgrade your CA to KSP & SHA256. Part II: Move from a CSP to KSP provider
Posted by Didier Van Hoye on February 3, 2017

Move from a CSP to KSP provider

Once you have moved to at least Windows Server 2008 R2 you can take this step. Any version below that doesn’t allow for it and should be considered end of life. Many haven’t made the move from a CSP to a KSP yet, even when they are already running Windows Server 2012 or 2012 R2, for a few reasons. There were some issues with older clients like Windows Server 2003 and Windows XP. These were fixed with a hotfix, but in all seriousness, if you’re still on those OS versions you need to move a.s.a.p., and if not, there’s nothing we can do to help you. A modern and secure PKI will be the least of your worries, I’m afraid. For a Microsoft reference, see Migrating a Certification Authority Key from a Cryptographic Service Provider (CSP) to a Key Storage Provider (KSP).
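The Microsoft article linked above walks through the full procedure; in outline it comes down to backing up the CA key, deleting the old CSP-based key and re-importing it into the software KSP. A heavily abridged sketch, with the path, password prompt and file name as placeholders only:

```powershell
# 1. Back up the CA database and private key before touching anything.
Backup-CARoleService -Path "C:\CABackup" -Password (Read-Host -AsSecureString)

# 2. After deleting the existing CSP-based key per the Microsoft guide,
#    import the CA certificate and key into the Key Storage Provider.
certutil -csp "Microsoft Software Key Storage Provider" -importpfx "C:\CABackup\<CA name>.p12"

# 3. Point the CA configuration at the KSP (values per the Microsoft article).
certutil -setreg ca\csp\Provider "Microsoft Software Key Storage Provider"
certutil -setreg ca\csp\ProviderType 0
```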

[Image: PKI certificate, General tab]


Upgrade your CA to KSP & SHA256. Part I: Setting the Stage
Posted by Didier Van Hoye on January 31, 2017

Introduction

Many Certificate Authority servers that were installed on Windows Server 2003 never got upgraded until Microsoft ceased support for Windows 2003. Some of those are still out there running today. A massive amount of them got set up in an era when Wi-Fi in the SME market became very popular and CA servers were deployed to easily secure access to it. To be fair, a lot of administrators didn’t wait for Windows Server 2003 support to expire and made sure their CA was more or less up to date by upgrading it in place. That alone is something to commend. However, the operating system version only introduces the capability of using modern, more secure providers and algorithms. It doesn’t upgrade the ones used by the PKI automatically for you. So many of these upgraded PKI servers are still using an old cryptographic provider, the “Microsoft Strong Cryptographic Provider”, and an old hash algorithm (SHA1) that’s been deprecated (see SHA1 Deprecation: What You Need to Know) or even banned.
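Before planning the upgrade it is worth checking what your CA is actually using today. A minimal sketch using certutil against the local CA:

```powershell
# Show which cryptographic provider holds the CA key
# (an old CSP such as "Microsoft Strong Cryptographic Provider" vs. a KSP).
certutil -getreg ca\csp\Provider

# Show the hash algorithm the CA signs with; depending on whether the CA
# uses a CSP or a KSP, one of these two values is the relevant one.
certutil -getreg ca\csp\HashAlgorithm
certutil -getreg ca\csp\CNGHashAlgorithm
```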



Windows Server 2016 Hyper-V Backup Rises to the challenges
Posted by Didier Van Hoye on September 19, 2016

Introduction

In Windows Server 2016 Microsoft improved Hyper-V backup to address many of the concerns mentioned in our previous article, Hyper-V backup challenges Windows Server 2016 needs to address:

  • They avoid the need for agents by making the APIs remotely accessible. It’s all WMI calls directly to Hyper-V.
  • They implemented their own CBT mechanism for Windows Server 2016 Hyper-V to reduce the amount of data that needs to be copied during every backup (see the sketch further below). This can be leveraged by any backup vendor and takes the responsibility for creating CBT away from the backup vendors, which makes it easier for them to support new Hyper-V releases faster. It also avoids the need to insert drivers into the IO path of the Hyper-V hosts. Sure, the testing & certification still has to happen, as all vendors can now be impacted by a bug MSFT introduces.
  • They are no longer dependent on the host VSS infrastructure. This eliminates the storage overhead, as well as the storage fabric IO overhead and performance issues, associated with needing host level VSS snapshots of an entire LUN/CSV for even a single VM.
  • This helps avoid the need for hardware VSS providers delivered by storage vendors and delivers better results with storage solutions that don’t offer hardware providers.
  • Storage vendors and backup vendors can still integrate this with their snapshots for speedy and easy backups and restores. But as the backup work at the VM level is separated from an (optional) host VSS snapshot, the performance hit is smaller and the total duration significantly reduced.
  • It’s efficient in regard to the amount of data that needs to be copied to the backup target and stored there. This reduces the capacity needed and, for some vendors, the almost hard dependency on deduplication to make it feasible in regard to cost.
  • These capabilities are available to anyone (backup vendors, storage vendors, home-grown PowerShell scripts …) who wishes to leverage them, and they don’t prevent anyone from implementing synthetic full backups, merging backups as they age, etc. It’s capable enough to allow great backup solutions to be built on top of it.

Let’s dive in together and take a closer look.
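One practical note on the native change tracking mentioned in the list above: Resilient Change Tracking only applies to VMs running the Windows Server 2016 configuration version. A minimal sketch for checking and, if needed, upgrading that; the VM name is illustrative and the version upgrade is one-way:

```powershell
# VMs imported from older hosts may still be on configuration version 5.0;
# the native change tracking needs the Windows Server 2016 version (8.0).
Get-VM | Format-Table Name, Version

# Upgrade a VM's configuration version (the VM must be powered off first).
Stop-VM -Name "DemoVM"
Update-VMVersion -Name "DemoVM"
Start-VM -Name "DemoVM"
```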

[Image: Windows Server 2016 Hyper-V backup]