Posted by Vladyslav Savchenko on October 4, 2018
StarWind rPerf: All-in-one free tool for ensuring the maximum performance level of your RDMA network connections

Nowadays we often come across systems with RDMA connections configured to increase overall performance, and it is no news that such environments may run different operating systems. In this article, we will be looking into a vital aspect of the process: the performance of RDMA connections. So, let's have a look at the configuration of RDMA, what we need to build and test RDMA connections, the problems you might face, and, finally, how to make it all perform in a way that will make us happy. So, let's start.
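
Before benchmarking, it helps to confirm that the adapters on both ends actually expose RDMA. A minimal pre-check on the Windows side, using the in-box NetAdapter cmdlets (the adapter name "NIC1" is just an example, not something from the article), might look like this:

    # List adapters and whether RDMA is enabled on each of them
    Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled

    # Enable RDMA on a specific adapter if it is currently off (example name)
    Enable-NetAdapterRdma -Name "NIC1"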

Learn More

Posted by Didier Van Hoye on September 27, 2018
Does low latency, high throughput & CPU offloading require RDMA?

Does low latency, high throughput & CPU offloading require RDMA? What? Blasphemy, how dare we even question this? In my defense, I'm not questioning anything; I am merely being curious. I'm the inquisitive kind. The need for RDMA is the premise we have been working with ever since RDMA became available outside of HPC InfiniBand fabrics. For those of us working in the Windows ecosystem, this came with SMB Direct, which leverages RDMA and was introduced in Windows Server 2012.
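
One way to satisfy that curiosity is to check whether your SMB 3 sessions are actually using RDMA at all. A quick, hedged sketch using the in-box SMB cmdlets (nothing article-specific assumed):

    # Active SMB multichannel connections; the RDMA capable columns show
    # whether SMB Direct is in play for each path
    Get-SmbMultichannelConnection

    # Per-NIC view of what the SMB client considers RDMA (SMB Direct) capable
    Get-SmbClientNetworkInterface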

Learn More

Posted by Didier Van Hoye on April 17, 2018
SMB Direct – The State of RDMA for use with SMB 3 traffic (Part II)

The Flavors of RDMA & what role DCB plays.
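
As a rough, hedged illustration of what DCB for RDMA typically involves on Windows Server with RoCE (the article covers the details; priority 3 and the 50% reservation below are example values, not recommendations from the post):

    # Tag SMB Direct (NetworkDirect, port 445) traffic with priority 3
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

    # Enable Priority Flow Control only for that priority
    Enable-NetQosFlowControl -Priority 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

    # Reserve bandwidth for the SMB traffic class via ETS (example percentage)
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

    # Apply DCB/QoS settings on the physical adapter (example name)
    Enable-NetAdapterQos -Name "NIC1"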

Learn More

Posted by Didier Van Hoye on April 12, 2018
SMB Direct – The State of RDMA for use with SMB 3 traffic (Part I)

What is RDMA and why do we like it.

Learn More

Posted by Taras Shved on December 27, 2017
Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment

In the previous article, I described 3 scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn't at all). For this, I'm going to examine how NVMe-oF performs on a bare-metal configuration and on infrastructures with Hyper-V and ESXi deployed. In each case, I'll also evaluate the performance of the iSER transport using LIO and SPDK iSCSI. Now that you have an overall understanding of the project, it's time to move on to configuring our testing environment.

Learn More

Posted by Taras Shved on December 20, 2017
Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©

There's a common opinion that performance in general, and IOPS-intensive performance like NVMe over Fabrics in particular, is usually lower in virtualized environments due to hypervisor overhead. Therefore, I've decided to run a series of tests to prove or knock down this belief. For this purpose, I'll use three scenarios for measuring the performance of NVMe over Fabrics in different infrastructures: first, on a bare-metal configuration; second, with Microsoft Hyper-V deployed on the client server; and finally, with ESXi 6.5.

Learn More

Posted by Alex Khorolets on November 14, 2017
StarWind iSER technology support

In the modern IT world, almost every tech guy, whether a systems administrator or an engineer, wants his environment to show the best results that can be squeezed out of the hardware. In this article, I want you to take a look at StarWind's support of iSER, which stands for iSCSI Extensions for RDMA. There's not much of a change in the overall system configuration: iSER uses the common iSCSI protocol over an RDMA transport that is available on network adapters with hardware offload capability. This means that iSER can deliver higher bandwidth, intended for large transfers of block storage data.

Learn More

Posted by Didier Van Hoye on October 12, 2017
SMB Direct in a Windows Server 2016 Virtual Machine Experiment

Ever since Windows Server 2012 we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for other workloads on the hosts, such as virtual machines.
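
As a small, hedged illustration of one such use case (not taken from the experiment itself): pointing Hyper-V live migration at SMB lets it benefit from SMB Direct when RDMA-capable NICs are present. The cmdlets below are standard Hyper-V/SMB ones; no article-specific configuration is implied.

    # Use SMB as the live migration transport so it can leverage SMB Direct
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
    Enable-VMMigration

    # Check which SMB server interfaces report RDMA capability
    Get-SmbServerNetworkInterface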

Learn More

Posted by Didier Van Hoye on September 27, 2017
The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct), you'll notice that I set -IeeePriorityTag to "On" on the vNICs that use DCB for QoS. This requires some explanation. When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This always happens no matter what you set the -IeeePriorityTag option to; On or Off, it doesn't make a difference. It works out of the box.
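
For reference, the setting in question is a per-vNIC switch; a minimal example of flipping it on a management OS vNIC (the vNIC name "SMB1" is illustrative, not from the post):

    # Turn on IEEE 802.1p priority tagging for a management OS vNIC
    Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On

    # Inspect the current value
    Get-VMNetworkAdapter -ManagementOS -Name "SMB1" | Format-List Name, IeeePriorityTag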

Learn More

Posted by Didier Van Hoye on September 20, 2017
Forcing the affinity of a virtual NIC to a physical NIC with a SET vSwitch via Set-VMNetworkAdapterTeamMapping

Windows Server 2016 Hyper-V brought us Switch Embedded Teaming (SET). That's the way forward when it comes to converged networking and Software-Defined Networking with the network controller and network virtualization. It also allows for the use of RDMA on a management OS virtual NIC (vNIC). One of the capabilities within SET is affinitizing a vNIC to a particular team member, that is, a physical NIC (pNIC). This isn't a hard requirement for SET to work properly, but it helps in certain scenarios. By a vNIC we mean either a management OS vNIC or a virtual machine vNIC; affinitizing can be done for both. The main use case and focus, here and in real life, is the management OS vNICs we use for SMB Direct traffic.
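
A minimal, hedged sketch of what that affinitization looks like, assuming an existing pair of RDMA-capable pNICs; all names ("SETswitch", "NIC1", "SMB1", and so on) are illustrative, not taken from the article:

    # Create a SET vSwitch over two physical NICs
    New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

    # Add two management OS vNICs intended for SMB Direct traffic and enable RDMA on them
    Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETswitch"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETswitch"
    Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"

    # Affinitize each vNIC to one team member (pNIC) and verify the mapping
    Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "NIC1"
    Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" -PhysicalNetAdapterName "NIC2"
    Get-VMNetworkAdapterTeamMapping -ManagementOS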

Learn More