Posted by Dmytro Khomenko on December 4, 2019
It’s NVMe time! How StarWind makes it possible to present the true performance of PCIe SSDs over the network

iSCSI and Fibre Channel have proven themselves as reliable workhorses of mission-critical storage over the last two decades, true. However, for all its merits, iSCSI was designed in the era of spinning disks: its SCSI command overhead and single-queue semantics keep it from delivering the low latency and massive parallelism that NVMe SSDs actually promise. Fortunately, we’ve been working really hard to remove that bottleneck, and we’ve finally nailed it!
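If you haven’t touched NVMe over Fabrics yet, here is roughly what consuming a remote NVMe device looks like from a Linux initiator. This is only a minimal sketch using stock nvme-cli over RDMA; the IP address, port, and NQN below are placeholders for whatever your target actually advertises, not StarWind-specific values:

```bash
# Ask the target which NVMe-oF subsystems it exports (address/port are placeholders)
nvme discover -t rdma -a 172.16.10.1 -s 4420

# Connect to the advertised subsystem by its NQN
nvme connect -t rdma -n nqn.2019-12.com.example:nvme-target -a 172.16.10.1 -s 4420

# The remote namespace now appears as a regular local block device
nvme list
```

From that point on, the remote PCIe SSD behaves like a locally attached NVMe drive, which is exactly why the fabric transport, not the device, becomes the thing worth tuning.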

Learn More

Posted by Taras Shved on December 27, 2017
Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment

In the previous article, I described three scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn’t at all). For this, I’m going to examine how NVMe-oF performs on a bare-metal configuration and on infrastructures with Hyper-V and ESXi deployed. In each case, I’ll also evaluate the performance of the iSER transport using LIO and SPDK iSCSI. Now that you have an overall understanding of the project, it’s time to move on to configuring our testing environment.
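To give a taste of what the SPDK side of that environment involves, here is a sketch of exporting a local PCIe SSD through the SPDK NVMe-oF target. Note the caveats: the binary path and RPC method names match recent SPDK releases (the 2017-era tree this series used named them differently), and the PCI address, NQN, serial, and IP are all placeholders:

```bash
# Start the NVMe-oF target app and create an RDMA transport (run from the SPDK tree)
./build/bin/nvmf_tgt &
./scripts/rpc.py nvmf_create_transport -t RDMA

# Claim the local NVMe drive as an SPDK bdev (PCI address is a placeholder)
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:04:00.0

# Create a subsystem, add the drive's namespace, and listen on the RDMA fabric
./scripts/rpc.py nvmf_create_subsystem nqn.2017-12.io.spdk:cnode1 -a -s SPDK00000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2017-12.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2017-12.io.spdk:cnode1 \
    -t rdma -a 172.16.10.1 -s 4420
```

The same target configuration is then reused unchanged across the bare-metal, Hyper-V, and ESXi runs, so any performance delta can be attributed to the client side.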

Learn More

Posted by Taras Shved on December 20, 2017
Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©

There’s a common opinion that performance in general, and IOPS-intensive workloads like NVMe over Fabrics in particular, suffers in virtualized environments due to hypervisor overhead. Therefore, I’ve decided to run a series of tests to confirm or debunk this belief. For this purpose, I’ll have three scenarios for measuring NVMe over Fabrics performance in different infrastructures: first, on a bare-metal configuration; second, with Microsoft Hyper-V deployed on the client server; and finally, with ESXi 6.5.
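As a preview of how such runs are typically measured, a 4K random-read IOPS test with fio against the connected NVMe-oF block device might look like the sketch below; the device path and job parameters are illustrative, not the exact settings used in this series:

```bash
# 4K random reads, queue depth 32 across 4 jobs, raw device, page cache bypassed
fio --name=nvmf-randread \
    --filename=/dev/nvme1n1 \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based \
    --group_reporting
```

Running the identical job file on bare metal and inside each hypervisor is what makes the three scenarios directly comparable.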

Learn More