SMB Direct – The State of RDMA for use with SMB 3 traffic (Part II)
Posted by Didier Van Hoye on April 17, 2018

The Flavors of RDMA & what role DCB plays.

(more…)


SMB Direct – The State of RDMA for use with SMB 3 traffic (Part I)
Posted by Didier Van Hoye on April 12, 2018

SMB Direct – The State of RDMA for use with SMB 3 traffic

What is RDMA and why do we like it?

(more…)


Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment
Posted by Taras Shved on December 27, 2017

Introduction

In the previous article, I described three scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn’t at all). For this, I’m going to examine how NVMe-oF performs on a bare-metal configuration and on infrastructures with Hyper-V and ESXi deployed. In each case, I’ll also evaluate the performance of the iSER transport using LIO and SPDK iSCSI. Now that you have an overall understanding of the project, it’s time to move on to configuring our testing environment.

NVMe-oF on bare metal

(more…)


Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©
Posted by Taras Shved on December 20, 2017

There’s a common opinion that performance in general, and IOPS-intensive workloads like NVMe over Fabrics in particular, is usually lower in virtualized environments due to the hypervisor overhead. Therefore, I’ve decided to run a series of tests to prove or debunk this belief. For this purpose, I’ll use three scenarios for measuring the performance of NVMe over Fabrics in different infrastructures: first, on a bare-metal configuration; second, with Microsoft Hyper-V deployed on the client server; and finally, with ESXi 6.5.

In each case, I’ll also compare the performance of NVMe-oF, NVMe-oF over iSER transport, and SPDK iSCSI.

NVMe over Fabrics on the client server

(more…)


StarWind iSER technology support
Posted by Alex Khorolets on November 14, 2017

In the modern IT world, almost every tech specialist, whether a systems administrator or an engineer, wants their environment to deliver the best results that can be squeezed out of the hardware. In this article, I want you to take a look at StarWind’s support of iSER technology, which stands for iSCSI Extensions for RDMA.

There’s not much of a change in the overall system configuration. iSER utilizes the common iSCSI protocol over an RDMA transport service that is available on some network adapters with hardware offload capability. This means that iSER can supply higher bandwidth, which is intended for large transfers of block storage data.
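
As a quick sanity check before looking at iSER, you can verify that your network adapters actually expose RDMA to Windows. A minimal PowerShell sketch, not taken from the article; the adapter name “NIC1” is a placeholder and this assumes a Windows host with RDMA-capable NICs:

# List all adapters and whether RDMA is currently enabled on them
Get-NetAdapterRdma | Format-Table Name, Enabled

# Enable RDMA on a specific adapter if it is capable but currently disabled
Enable-NetAdapterRdma -Name "NIC1"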

RDMA feature

(more…)


SMB Direct in a Windows Server 2016 Virtual Machine Experiment
Posted by Didier Van Hoye on October 12, 2017

Introduction

Ever since Windows Server 2012, we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with offloading work from the CPU to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.
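
If you want to see whether SMB Direct is actually in play on a host, the SMB cmdlets report it. A small PowerShell sketch, not from the original post, assuming Windows Server 2012 R2 or later with active SMB traffic:

# Check which client NICs are reported as RDMA-capable for SMB
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, RssCapable

# Inspect current SMB Multichannel connections and look at the Client RDMA Capable column
Get-SmbMultichannelConnection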

Traditionally, in order for SMB Direct to work, the SMB stack needs direct access to the RDMA NICs. This means that right up to Windows Server 2012 R2, we had SMB Direct running on physical NICs on the host or in the parent partition/management OS. You could not have RDMA exposed on a vNIC or even on a host-native NIC team (LBFO). SMB Direct was also not compatible with SR-IOV. That was, and for those OS versions still is, common knowledge and a design consideration. With Windows Server 2016, things changed. You can now have RDMA exposed on a vSwitch and on management OS vNICs. Even better, the new Switch Embedded Teaming (SET) allows RDMA to be exposed in the same way on top of a teamed vSwitch. SET is an important technology here, as RDMA is still not exposed on a native Windows team (LBFO).
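
For reference, standing up such a configuration takes only a few lines of PowerShell. This is a minimal sketch under assumed names (“SETswitch”, “SMB1”/“SMB2” and “NIC1”/“NIC2” are placeholders), not the exact commands from the article:

# Create a SET vSwitch on two physical RDMA-capable NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add two management OS vNICs intended for SMB Direct traffic
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETswitch"

# Expose RDMA on those vNICs (they appear in the host as "vEthernet (SMB1)" and so on)
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"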

Mellanox InfiniBand Router

(more…)


The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming
Posted by Didier Van Hoye on September 27, 2017

Introduction

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct) you’ll notice that I set the -IeeePriorityTag to “On” on the vNICs that use DCB for QoS. This requires some explanation.

When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This always happens no matter what you set the -IeeePriorityTag option to. On or Off, it doesn’t make a difference. It works out of the box.
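
Setting the option itself is a one-liner per management OS vNIC. A minimal PowerShell sketch, not from the article; the vNIC name “SMB1” is a placeholder:

# Turn on IEEE 802.1p priority tagging for traffic leaving this management OS vNIC
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On

# Inspect the current setting
Get-VMNetworkAdapter -ManagementOS -Name "SMB1" | Format-List Name, IeeePriorityTag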

Mapped RDMA vNICs to their respective RDMA pNICs

(more…)


Forcing the affinity of a virtual NIC to a physical NIC with a SET vSwitch via Set-VMNetworkAdapterTeamMapping
Posted by Didier Van Hoye on September 20, 2017

Introduction

Windows Server 2016 Hyper-V brought us Switch Embedded Teaming (SET). That’s the way forward when it comes to converged networking and Software-Defined Networking with the network controller and network virtualization. It also allows for the use of RDMA on a management OS virtual NIC (vNIC).

One of the capabilities within SET is affinitizing a vNIC to a particular team member, that is, a physical NIC (pNIC). This isn’t a hard requirement for SET to work properly, but it helps in certain scenarios. By a vNIC we mean either a management OS vNIC or a virtual machine vNIC; affinitizing can be done for both. The main use case, and the focus here and in real life, is the management OS vNICs we use for SMB Direct traffic.
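
The affinity itself is set and checked with a single cmdlet per vNIC. A minimal sketch with placeholder names (“SMB1”, “NIC1”); your vNIC and pNIC names will differ:

# Pin the management OS vNIC "SMB1" to the physical team member "NIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "NIC1"

# Verify the current vNIC-to-pNIC mappings
Get-VMNetworkAdapterTeamMapping -ManagementOS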

complete Switch Embedded Teaming configuration

(more…)


Why do we always see Responder CQE Errors with RoCE RDMA?
Posted by Didier Van Hoye on June 2, 2017


Anyone who has configured and used SMB Direct with RoCE RDMA Mellanox cards appreciates the excellent diagnostic counters Mellanox provides for use with Windows Performance Monitor. They are instrumental when it comes to finding issues and verifying everything is working correctly.
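
If you have never looked at those counters, you can discover what the driver exposes on your host straight from PowerShell before building a Performance Monitor view. A small sketch; the counter set names vary with the WinOF/WinOF-2 driver version, so treat the wildcard as an assumption:

# Discover the Mellanox counter sets available on this host
Get-Counter -ListSet "*Mellanox*" | Select-Object CounterSetName

# Dump the individual counters of the diagnostics set(s) so you can pick the ones you care about
(Get-Counter -ListSet "*Mellanox*Diagnostics*").Counter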

Many have complained about the complexity of DCB configuration, but in all fairness, any large network under congestion that needs specialized configuration faces challenges due to scale. This is no different for DCB. You need the will to tackle the job at hand and do it right. Doing anything at scale reliably and consistently means automating it. Lossless Ethernet, mandatory or not, requires DCB to shine. There is little other choice today until networking technology & newer hardware solutions take an evolutionary step forward. I hope to address this in a future article. But this is not what we are going to discuss here. We’ve moved beyond that challenge. We’ll talk about one of the issues that confuse a lot of people.
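
As an aside, since automation came up: a typical DCB setup for SMB Direct boils down to a handful of PowerShell lines. A hedged sketch, not taken from this article; priority 3 and the 50% ETS reservation are example values, and the adapter names are placeholders:

# Tag SMB Direct (NetworkDirect, port 445) traffic with priority 3
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for that priority, keep the rest lossy
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for the SMB traffic class via ETS
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB on the physical RDMA NICs and do not accept switch-pushed (DCBX willing) settings
Enable-NetAdapterQos -Name "NIC1","NIC2"
Set-NetQosDcbxSetting -Willing $false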

Responder CQE errors reported after virtual machine migration from a Hyper-V cluster

(more…)


Musings on Windows Server Converged Networking & Storage
Posted by Didier Van Hoye on August 19, 2016

Why you should learn about SMB Direct, RDMA & lossless Ethernet for both networking & storage solutions

Fully converged Hyper-V QoS (courtesy of Microsoft)

Server, Hypervisor, Storage

Too many people still perceive Windows Server as “just” an operating system (OS). It’s so much more. It’s an OS, a hypervisor, and a storage platform with a highly capable networking stack. Both virtualization and cloud computing are driving the convergence of all these roles forward fast, with intent and purpose. We’ll position the technologies & designs that convergence requires and look at their implications for a better overall understanding of this trend.
(more…)
