Posted by Didier Van Hoye on April 17, 2018
SMB Direct – The State of RDMA for use with SMB 3 traffic (Part II)

The Flavors of RDMA & what role DCB plays.

Learn More

Posted by Didier Van Hoye on April 12, 2018
SMB Direct – The State of RDMA for use with SMB 3 traffic (Part I)

What is RDMA and why do we like it.

Learn More

Posted by Taras Shved on December 27, 2017
Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment

In the previous article, I described three scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn’t at all). For this, I’m going to examine how NVMe-oF performs on a bare-metal configuration and on infrastructures with Hyper-V and ESXi deployed. In each case, I’ll also evaluate the performance of the iSER transport using LIO and SPDK iSCSI. Now that you have an overall understanding of the project, it’s time to move on to configuring our testing environment.

Learn More

Posted by Taras Shved on December 20, 2017
Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©

There’s a common opinion that performance in general, and IOPS-intensive performance like NVMe over Fabrics in particular, is usually lower in virtualized environments due to the hypervisor overhead. Therefore, I’ve decided to run a series of tests to prove or debunk this belief. For this purpose, I’ll have three scenarios for measuring the performance of NVMe over Fabrics in different infrastructures: first, on a bare-metal configuration; second, with Microsoft Hyper-V deployed on the client server; and finally, with ESXi 6.5.

Learn More

Posted by Alex Khorolets on November 14, 2017
StarWind iSER technology support

In the modern IT world, almost every tech guy, whether a systems administrator or an engineer, wants his environment to show the best results that can be squeezed out of the hardware. In this article, I want you to take a look at StarWind’s support for iSER technology, which stands for iSCSI Extensions for RDMA. There’s not much of a change in the overall system configuration: iSER utilizes the common iSCSI protocol over an RDMA transport service available on network adapters with hardware offload capability. This means that iSER can supply higher bandwidth, intended for large transfers of block storage data.

Learn More

Posted by Didier Van Hoye on October 12, 2017
SMB Direct in a Windows Server 2016 Virtual Machine Experiment

Ever since Windows Server 2012 we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which delivers high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.
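
As a quick illustration of the kind of check this experiment relies on, here is a minimal PowerShell sketch for verifying that RDMA-capable NICs are present and that SMB is actually using them; the cmdlets are standard inbox ones, and the filtering shown is just one way to do it, not taken from the post itself.

Get-NetAdapterRdma | Where-Object Enabled                  # NICs with RDMA enabled at the adapter level
Get-SmbClientNetworkInterface | Where-Object RdmaCapable   # interfaces the SMB client sees as RDMA capable
Get-SmbMultichannelConnection                              # shows whether live SMB connections are using RDMA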

Learn More

Posted by Didier Van Hoye on September 27, 2017
The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct), you’ll notice that I set the -IeeePriorityTag to “On” on the vNICs that use DCB for QoS. This requires some explanation. When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This always happens no matter what you set the -IeeePriorityTag option to; On or Off, it doesn’t make a difference. It works out of the box.
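
For context, here is a minimal PowerShell sketch of the configuration being discussed, assuming two management OS vNICs named “SMB1” and “SMB2” and DCB priority 3 for SMB Direct; the names and the priority value are placeholders, not taken from the post.

New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3   # classify SMB Direct traffic into priority 3 (assumed priority)
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On                   # let the vNIC pass IEEE priority tags
Set-VMNetworkAdapter -ManagementOS -Name "SMB2" -IeeePriorityTag On                   # same for the second SMB vNIC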

Learn More

Posted by Didier Van Hoye on September 20, 2017
Forcing the affinity of a virtual NIC to a physical NIC with a SET vSwitch via Set-VMNetworkAdapterTeamMapping

Windows Server 2016 Hyper-V brought us Switch Embedded Teaming (SET). That’s the way forward when it comes to converged networking and Software-Defined Networking with the network controller and network virtualization. It also allows for the use of RDMA on a management OS virtual NIC (vNIC). One of the capabilities within SET is affinitizing a vNIC to a particular team member, that is, a physical NIC (pNIC). This isn’t a hard requirement for SET to work properly, but it helps in certain scenarios. By a vNIC we mean either a management OS vNIC or a virtual machine vNIC; affinitizing can be done for both. The main use case and focus, both here and in real life, is the management OS vNICs we use for SMB Direct traffic.
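
As a rough sketch of what that affinitizing looks like in PowerShell, assuming management OS vNICs “SMB1”/“SMB2” and physical NICs “NIC1”/“NIC2” (placeholder names, not from the post):

Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "NIC1"   # pin SMB1 to team member NIC1
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" -PhysicalNetAdapterName "NIC2"   # pin SMB2 to team member NIC2
Get-VMNetworkAdapterTeamMapping -ManagementOS                                                               # verify the vNIC-to-pNIC affinity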

Learn More

Posted by Didier Van Hoye on June 2, 2017
Why do we always see Responder CQE Errors with RoCE RDMA?

Anyone who has configured and used SMB Direct with RoCE RDMA Mellanox cards appreciates the excellent diagnostic counters Mellanox provides for use with Windows Performance Monitor. They are instrumental when it comes to finding issues and verifying everything is working correctly.
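
To give an idea of how such counters can be pulled outside the Performance Monitor GUI, here is a small PowerShell sketch; the exact counter set and counter names depend on the installed Mellanox driver and the Windows build, so treat the paths below as assumptions to adapt rather than canonical names.

Get-Counter -ListSet "*Mellanox*" | Select-Object CounterSetName                               # discover the Mellanox diagnostic counter sets
Get-Counter "\RDMA Activity(*)\RDMA Completion Queue Errors" -SampleInterval 2 -MaxSamples 5   # sample the inbox RDMA activity counters (counter path is an assumption)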

Learn More

Posted by Didier Van Hoye on August 19, 2016
Musings on Windows Server Converged Networking & Storage

Why you should learn about SMB Direct, RDMA & lossless Ethernet for both networking & storage solutions

Image: fully converged Hyper-V QoS (courtesy of Microsoft)

Server, Hypervisor, Storage

Too many people still perceive Windows Server as “just” an operating system (OS). It’s so much more: an OS, a hypervisor, and a storage platform with a highly capable networking stack. Both virtualization and cloud computing are driving the convergence of all these roles forward fast, with intent and purpose. We’ll position the technologies & designs that convergence requires and look at their implications for a better overall understanding of this trend.
Learn More