Didier Van Hoye
Didier Van Hoye is an IT veteran with over 17 years of expertise in Microsoft technologies, storage, virtualization and networking. He works mainly as a subject matter expert advisor and infrastructure architect in Wintel environments, leveraging DELL hardware to build the best possible high-performance solutions with great value for money. He contributes his experience and knowledge to the global community as a Microsoft MVP in Hyper-V, a Veeam Vanguard, a member of the Microsoft Extended Experts Team in Belgium and a DELL TechCenter Rockstar. He does so as a blogger, author, presenter and public speaker.

All posts by this author

Posted by Didier Van Hoye on September 27, 2018
Does low latency, high throughput & CPU offloading require RDMA?

Does low latency, high throughput & CPU offloading require RDMA? What? Blasphemy, how dare we even question this? In my defense, I’m not questioning anything. I am merely being curious. I’m the inquisitive kind. The need for RDMA is the premise we have been working with ever since RDMA became available outside of HPC InfiniBand fabrics. For those of us working in the Windows ecosystem, this was with SMB Direct. Windows Server 2012 was the OS version that introduced us to SMB Direct, which leverages RDMA.
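
As a quick sanity check, here is a minimal PowerShell sketch (in-box cmdlets only; adapter names will differ per environment) to see whether your NICs expose RDMA and whether SMB 3 actually negotiated it:

# List network adapters and whether RDMA is enabled on them
Get-NetAdapterRdma | Format-Table Name, Enabled

# Ask the SMB client which interfaces it considers RDMA-capable
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable

# Verify that live SMB 3 connections negotiated RDMA on both ends
Get-SmbMultichannelConnection | Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable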

Learn More

Posted by Didier Van Hoye on April 24, 2018
SMB Direct – The State of RDMA for use with SMB 3 traffic (Part III)

The RDMA wars with regard to SMB Direct: RoCE versus iWARP.

Learn More

Posted by Didier Van Hoye on April 17, 2018
SMB Direct – The State of RDMA for use with SMB 3 traffic (Part II)

The Flavors of RDMA & what role DCB plays.

Learn More

Posted by Didier Van Hoye on April 12, 2018
SMB Direct – The State of RDMA for use with SMB 3 traffic (Part I)

What is RDMA and why do we like it?

Learn More

Posted by Didier Van Hoye on February 14, 2018
Replacing a Veeam Agent for Windows host while preserving existing local or share backups

Imagine you have a server whose data source volumes are backed up to local (or share) target backup volumes with Veeam Agent for Windows (VAW). You might or might not back up the OS as well. That server is old, has issues, or has crashed beyond repair and needs to be replaced. You don’t necessarily care all that much about the server OS, but you do care about your data backup history! You don’t want to lose all those restore points. Basically, we try to answer how you replace the server when it’s a local Veeam Agent for Windows 2.1 deployment.
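
The post walks through the full procedure; purely as an illustration of one step, here is a hedged sketch of carrying the existing backup chain over to the replacement host so the restore points survive (the paths and server name are hypothetical examples, not the article’s actual values):

# Hypothetical paths: mirror the existing VAW backup repository to the
# new host, preserving data, attributes and timestamps so the chain of
# restore points stays intact before pointing the new VAW install at it
robocopy "D:\VeeamBackup" "\\NEWSERVER\D$\VeeamBackup" /MIR /COPY:DAT /R:1 /W:1 /LOG:C:\Temp\vaw-move.log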

Learn More

Posted by Didier Van Hoye on December 5, 2017
Using a VEEAM off-host backup proxy server for backing up Windows Server 2016 Hyper-V Hosts

Many years ago, I wrote a white paper on how to configure a VEEAM off-host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster that uses a hardware VSS provider with VEEAM Backup & Replication 7.0. It has aged well and you can still use it as a guide to set it all up. But in this article, I revisit the use of a hardware VSS provider, dedicated specifically to the changes in Windows Server 2016 and its use by Veeam Backup & Replication v9.5 or later. The information here is valid for any good hardware VSS provider, like the one StarWind Virtual SAN provides (see “Do I need StarWind Hardware VSS provider?”).
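
As a quick check on the off-host proxy, a minimal sketch to confirm that a hardware VSS provider is actually registered (provider names vary per vendor):

# List all registered VSS providers; a hardware provider shows up with
# provider type "Hardware" in the output
vssadmin list providers

# The same information via CIM
Get-CimInstance -ClassName Win32_ShadowProvider | Format-Table Name, Type, Version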

Learn More

Posted by Didier Van Hoye on November 21, 2017
Take a look at Storage QoS Policies in Windows Server 2016

In Windows Server 2016, Microsoft introduced storage Quality of Service (QoS) policies. Previously, in Windows Server 2012 R2, we could set minimum and maximum IOPS per individual virtual hard disk, but this was limited even if you could automate it with PowerShell. The maximum was enforced, but the minimum was not: it only logged a warning if it could not be delivered, and it took automation beyond what was practical for many administrators when it needed to be done at scale. While it was helpful and I used it in certain scenarios, it needed to mature to deliver real value and offer storage QoS in environments with cost-effective, highly available storage that often doesn’t include native QoS capabilities for use with Hyper-V.
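
To make the contrast concrete, a minimal sketch of both approaches (the VM and policy names are examples; the 2016 policies assume the virtual hard disks live on a Scale-Out File Server or S2D cluster):

# Windows Server 2012 R2 style: per virtual hard disk, minimum not enforced
Get-VM -Name "SQL01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -MinimumIOPS 100 -MaximumIOPS 5000

# Windows Server 2016 style: a named policy on the storage cluster...
New-StorageQosPolicy -Name "Gold" -MinimumIops 100 -MaximumIops 5000 -PolicyType Dedicated

# ...applied to virtual hard disks by its PolicyId
$policy = Get-StorageQosPolicy -Name "Gold"
Get-VM -Name "SQL01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId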

Learn More

Posted by Didier Van Hoye on October 12, 2017
SMB Direct in a Windows Server 2016 Virtual Machine Experiment

Ever since Windows Server 2012, we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which delivers high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.
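
For the experiment, a small sketch of what one might check inside the guest to see whether RDMA is even offered there and whether SMB Direct ever becomes active (in-box cmdlets only):

# Inside the VM: does any vNIC report RDMA capability to the guest OS?
Get-NetAdapterRdma | Format-Table Name, Enabled

# The "SMB Direct Connection" performance counter set only gets live
# per-connection instances while SMB Direct is actually carrying traffic
(Get-Counter -ListSet "SMB Direct Connection").PathsWithInstances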

Learn More

Posted by Didier Van Hoye on September 27, 2017
The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct), you’ll notice that I set -IeeePriorityTag to “On” on the vNICs that use DCB for QoS. This requires some explanation. When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This always happens, no matter what you set the -IeeePriorityTag option to. On or Off, it doesn’t make a difference: it works out of the box.
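
For reference, a minimal sketch of the setting under discussion (the vNIC name “SMB1” is an example):

# Tag the converged management OS vNIC so that tagged traffic other than
# SMB Direct also keeps its 802.1p priority when leaving the SET vSwitch
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On

# Verify the current setting on the vNIC
Get-VMNetworkAdapter -ManagementOS -Name "SMB1" | Format-List Name, IeeePriorityTag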

Learn More

Posted by Didier Van Hoye on September 20, 2017
Forcing the affinity of a virtual NIC to a physical NIC with a SET vSwitch via Set-VMNetworkAdapterTeamMapping

Windows Server 2016 Hyper-V brought us Switch Embedded Teaming (SET). That’s the way forward when it comes to converged networking and Software-Defined Networking with the network controller and network virtualization. It also allows for the use of RDMA on a management OS virtual NIC (vNIC). One of the capabilities within SET is affinitizing a vNIC to a particular team member, that is, a physical NIC (pNIC). This isn’t a hard requirement for SET to work properly, but it helps in certain scenarios. By a vNIC we mean either a management OS vNIC or a virtual machine vNIC; affinitizing can be done for both. The main use case, and the focus here and in real life, is the management OS vNICs we use for SMB Direct traffic.
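
A minimal sketch of affinitizing two management OS SMB vNICs to their physical SET team members (the adapter names are examples):

# Pin each management OS SMB vNIC to one physical team member of the SET vSwitch
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" -PhysicalNetAdapterName "NIC2"

# Verify the resulting mappings
Get-VMNetworkAdapterTeamMapping -ManagementOS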

Learn More