Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment
Posted by Taras Shved on December 27, 2017


In the previous article, I described 3 scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn’t at all). For this, I’m going to examine how NVMe-oF performs on a bare metal configuration and on infrastructures with Hyper-V and ESXi deployed. In each case, I’ll also evaluate the performance of iSER transport using LIO and SPDK iSCSI. Now that you have an overall understanding of the project, it’s time to move on to configuring our testing environment.

NVMe-oF on bare metal


Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©
Posted by Taras Shved on December 20, 2017

There’s a common opinion that performance in general, and IOPS-intensive workloads like NVMe over Fabrics in particular, is usually lower in virtualized environments due to the hypervisor overhead. Therefore, I’ve decided to run a series of tests to prove or knock down this belief. For this purpose, I’ll have three scenarios for measuring the performance of NVMe over Fabrics in different infrastructures: first, on a bare metal configuration; second, with Microsoft Hyper-V deployed on the client server; and finally, with ESXi 6.5.

In each case, I’ll also compare the performance of NVMe-oF, NVMe-oF over iSER transport, and SPDK iSCSI.

NVMe over Fabrics on the client server


StarWind iSER technology support
Posted by Alex Khorolets on November 14, 2017

In the modern IT world, almost every tech guy, whether a systems administrator or an engineer, wants his environment to show the best results that can be squeezed out of the hardware. In this article, I want you to take a look at StarWind’s support of iSER technology, which stands for iSCSI Extensions for RDMA.

There’s not much of a change in the overall system configuration. iSER utilizes the common iSCSI protocol over an RDMA transport, which is available on network adapters with hardware offload capability. This means that iSER can deliver higher bandwidth, which makes it well suited for large transfers of block storage data.
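To make that concrete, switching an existing LIO iSCSI target to iSER mostly comes down to flipping the portal into iSER mode and telling the initiator to use the RDMA transport. Here is a minimal sketch; the IQN, portal address, and RDMA tooling availability are assumptions for illustration, not values from this setup:

```shell
# Verify the NIC exposes an RDMA device (iproute2 'rdma' tool; 'ibv_devices'
# from rdma-core works too)
rdma link show

# Target side (LIO): enable iSER on an existing iSCSI portal
# (IQN and portal below are placeholders)
targetcli /iscsi/iqn.2017-12.com.example:test/tpg1/portals/0.0.0.0:3260 \
          enable_iser true

# Initiator side (open-iscsi): switch the node to the iSER transport, then log in
iscsiadm -m node -T iqn.2017-12.com.example:test -p 172.16.10.1 \
         --op update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2017-12.com.example:test -p 172.16.10.1 --login
```

These commands require an RDMA-capable adapter (e.g. RoCE or InfiniBand) on both sides; on plain Ethernet NICs the login will fall back with a transport error rather than silently using TCP.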

RDMA feature