Latest articles

Didier Van Hoye
Cloud and Virtualization Architect. Didier is an IT veteran with over 20 years of expertise in Microsoft technologies, storage, virtualization, and networking. Didier primarily works as an expert advisor and infrastructure architect.

Windows Server 2019 introduces a new SMB Mapping Option UseWriteThrough

Did you know that not every solution requires highly available storage? Sometimes you need the option not to leverage any OS caching. That is exactly what the UseWriteThrough option of the New-SmbMapping cmdlet in Windows Server 2019 provides. It lets you disable caching on SMB mappings in certain scenarios, giving you the possibility to prefer reliability over performance with non-continuously available file shares. So, always test your “paper proofs of concept” and find the best options for your IT infrastructure and business!
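As a quick sketch of what such a mapping could look like (the drive letter and share path here are placeholders, not from the article):

```powershell
# Map a drive to a non-continuously available file share, bypassing
# OS caching so every write goes straight through to the server.
New-SmbMapping -LocalPath "X:" -RemotePath "\\FileServer\Backups" `
    -UseWriteThrough $true

# Verify the mapping was created.
Get-SmbMapping -LocalPath "X:" | Select-Object LocalPath, RemotePath
```

Write-through trades performance for reliability, which is the point in these scenarios.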


Does low latency, high throughput & CPU offloading require RDMA?

Does low latency, high throughput & CPU offloading require RDMA? What? Blasphemy, how dare we even question this? In my defense, I’m not questioning anything; I am merely curious. I’m the inquisitive kind. The need for RDMA is the premise we have been working with ever since RDMA became available outside of HPC InfiniBand fabrics. For those of us working in the Windows ecosystem, that happened with SMB Direct, which was introduced in Windows Server 2012 and leverages RDMA.


SMB Direct – The State of RDMA for use with SMB 3 traffic (Part III)

The RDMA wars with regard to SMB Direct: RoCE versus iWARP.


Replacing a Veeam Agent for Windows host while preserving existing local or share backups

Imagine you have a server with data source volumes that are backed up to local (or share) target backup volumes with Veeam Agent for Windows (VAW). You might or might not back up the OS as well. That server is old, has issues, or has crashed beyond repair and needs to be replaced. You don’t necessarily care all that much about the server OS, but you do care about your data backup history! You don’t want to lose all those restore points. Basically, we try to answer how you replace the backup server when it’s a local Veeam Agent for Windows 2.1 deployment.


Using a VEEAM off-host backup proxy server for backing up Windows Server 2016 Hyper-V Hosts

Many years ago, I wrote a white paper on how to configure a VEEAM off-host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster that uses a hardware VSS provider with VEEAM Backup & Replication 7.0. It has aged well and you can still use it as a guide to set it all up. But in this article, I revisit the use of a hardware VSS provider, looking specifically at some changes in Windows Server 2016 and its use by Veeam Backup & Replication v9.5 or later. The information here is valid for any good hardware VSS provider, like the one StarWind VSAN provides (see Do I need StarWind Hardware VSS provider?).


Take a look at Storage QoS Policies in Windows Server 2016

In Windows Server 2016, Microsoft introduced storage Quality of Service (QoS) policies. Previously, in Windows Server 2012 R2, we could set minimum and maximum IOPS individually per virtual hard disk, but this was limited even if you could automate it with PowerShell. The maximum was enforced, but the minimum was not: it only logged a warning when it could not be delivered, and acting on that took automation beyond what was practical for many administrators at scale. While it was helpful, and I used it in certain scenarios, it needed to mature to deliver real value and offer storage QoS in environments where cost-effective, highly available storage is used that often doesn’t include native QoS capabilities for use with Hyper-V.
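To give an idea of the Windows Server 2016 approach, a policy might be created and assigned along these lines (run against a cluster that supports Storage QoS; the policy name, IOPS values, and VM name are illustrative, not from the article):

```powershell
# Create a "Dedicated" policy: every virtual hard disk assigned to it
# gets its own 100 IOPS minimum and 500 IOPS maximum.
$policy = New-StorageQosPolicy -Name "GoldTier" -PolicyType Dedicated `
    -MinimumIops 100 -MaximumIops 500

# On the Hyper-V host, attach the policy to a VM's virtual hard disks.
Get-VM -Name "DemoVM" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```

Unlike the per-VHD settings of 2012 R2, the policy is a named, centrally managed object, which is what makes it practical at scale.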


SMB Direct in a Windows Server 2016 Virtual Machine Experiment

Ever since Windows Server 2012 we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which delivers high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.
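As a quick way to see whether a host (or, in this experiment, a VM) has RDMA-capable interfaces that SMB Direct can use, something like this sketch works:

```powershell
# Physical or virtual NICs with RDMA enabled on this machine.
Get-NetAdapterRdma | Where-Object Enabled

# The SMB client's view: interfaces flagged RdmaCapable are the ones
# SMB Direct will actually use for its traffic.
Get-SmbClientNetworkInterface |
    Where-Object RdmaCapable |
    Select-Object FriendlyName, LinkSpeed
```

If nothing shows up as RDMA capable, SMB falls back to regular TCP transport.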


The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct), you’ll notice that I set -IeeePriorityTag to “On” on the vNICs that use DCB for QoS. This requires some explanation. When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This always happens no matter what you set the -IeeePriorityTag option to. On or Off, it doesn’t make a difference; it works out of the box.
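For reference, the setting in question is flipped per management OS vNIC like so (the vNIC name "SMB1" is a placeholder, not from the article):

```powershell
# Allow the management OS vNIC on the SET vSwitch to pass its
# 802.1p priority tag through to the physical network.
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On

# Inspect the current value.
Get-VMNetworkAdapter -ManagementOS -Name "SMB1" |
    Select-Object Name, IeeePriorityTag
```

The point of the article is that SMB Direct traffic is tagged correctly regardless; the option matters for the other traffic on those vNICs.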