StarWind NVMe over Fabrics (NVMe-oF)

Published: March 22, 2019

INTRODUCTION

NVMe is one of the hottest topics in the world of storage these days. Expectations for this technology are so high that 2019 is sometimes called the year of NVMe. Nevertheless, PCIe SSDs are still too expensive for SMB and ROBO environments to build all-NVMe storage infrastructures. To ensure that users fully benefit from this technology with a minimal hardware footprint, StarWind introduces a new protocol for StarWind Virtual SAN: NVMe over Fabrics (NVMe-oF).

PROBLEM

Nowadays, as flash becomes increasingly prevalent, adding one or two NVMe drives to a cluster seems like a really good idea, especially if you run IOPS-hungry applications in it. Still, in the chase for higher performance, system administrators and business decision makers often forget about smart hardware utilization. This question is especially critical for SMB and ROBO, which often run IT projects on tight budgets; that is why adding an NVMe drive (or two) to the cluster is a big deal for them. And yet, most environments simply cannot access a good part of PCIe SSD performance. Wondering why? The answer is quite straightforward: the protocols.

Traditional iSCSI, iSER, SMB3, and NFS were designed to talk to slow storage media, not flash. Their single short command queue throttles NVMe drive I/O so badly that applications never see a good part of the underlying storage performance. Of course, you will still get a performance boost after adding PCIe SSDs to the cluster, but overall VM performance will be only about 20% higher than with spindle drives. Let's face it: that is a mere fraction of the performance NVMe drives can provide.

Serial Attached SCSI (SAS) – Single short command queue is a performance bottleneck

SOLUTION

Are there any alternatives to iSCSI-derived protocols? Yes, there is one tailored to unlock peak NVMe drive performance: NVMe-oF. The single short command queue is replaced with up to 64 thousand command queues, each up to 64 thousand commands deep. This design cuts latency remarkably and delivers all the IOPS an NVMe drive can provide.

NVMe-oF – Networking is not a performance bottleneck anymore
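
To see what this looks like in practice outside of StarWind Virtual SAN, here is a minimal sketch of attaching an NVMe-oF namespace over RDMA on a Linux initiator with the standard nvme-cli tool. The target address 192.168.0.10 and the NQN nqn.2019-03.example:optane01 are placeholders for illustration:

    # Load the RDMA transport for the NVMe-oF initiator
    modprobe nvme-rdma

    # Ask the target which subsystems it exports (address and port are placeholders)
    nvme discover -t rdma -a 192.168.0.10 -s 4420

    # Connect to a discovered subsystem by its NQN
    nvme connect -t rdma -n nqn.2019-03.example:optane01 -a 192.168.0.10 -s 4420

    # The remote namespace now appears as a local block device, e.g. /dev/nvme1n1
    nvme list

Once connected, the kernel maps the drive's I/O queues onto the RDMA connection, so applications reach the remote drive through the same multi-queue path as a local one.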

The problem is that, so far, NVMe-oF has been available only for Linux hypervisors. Microsoft Hyper-V users are left with protocols that are proven to be inefficient for flash. That said, the real challenge these days is bringing NVMe-oF to Hyper-V. For that purpose, we added NVMe-oF support to StarWind Virtual SAN.
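
For comparison, this is roughly what serving an NVMe drive over fabrics involves on the Linux side, using the in-kernel nvmet target and its configfs interface. A sketch only: the device path, address, and NQN are placeholder assumptions, and allowing any host to connect is for lab use only:

    # Load the in-kernel NVMe target and its RDMA transport
    modprobe nvmet
    modprobe nvmet-rdma

    # Create a subsystem and allow any host to connect (lab setting only)
    SUB=/sys/kernel/config/nvmet/subsystems/nqn.2019-03.example:optane01
    mkdir $SUB
    echo 1 > $SUB/attr_allow_any_host

    # Expose a local NVMe drive as namespace 1
    mkdir $SUB/namespaces/1
    echo -n /dev/nvme0n1 > $SUB/namespaces/1/device_path
    echo 1 > $SUB/namespaces/1/enable

    # Create an RDMA port and link the subsystem to it
    PORT=/sys/kernel/config/nvmet/ports/1
    mkdir $PORT
    echo rdma > $PORT/addr_trtype
    echo ipv4 > $PORT/addr_adrfam
    echo 192.168.0.10 > $PORT/addr_traddr
    echo 4420 > $PORT/addr_trsvcid
    ln -s $SUB $PORT/subsystems/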

How successful is StarWind's implementation of NVMe-oF? Let the numbers talk! We obtained over 2M IOPS on 4 Intel Optane SSD 900P drives in a bare-metal environment, while latency was only 10 microseconds higher than in Intel's datasheet. In other words, the protocol virtually eliminates the difference between locally connected PCIe SSDs and ones presented over the network.
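
If you want to sanity-check numbers like these in your own environment, a 4K random read run with fio against the network-attached namespace is a reasonable starting point. A sketch only; the device path, queue depth, and job count are placeholders you would tune to your hardware:

    # 4K random reads, direct I/O, moderate queue depth per job
    fio --name=nvmeof-randread --filename=/dev/nvme1n1 --ioengine=libaio \
        --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=8 \
        --time_based --runtime=60 --group_reporting

Compare the reported IOPS and completion latency against the same run on a locally attached drive to measure the fabric overhead.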

CONCLUSION

When it comes to underlying storage performance, NVMe is the true king of the hill. However, it is still difficult to utilize PCIe SSDs effectively since traditional SCSI-based protocols do not work that well with flash memory. NVMe-oF is a protocol tailored for flash. StarWind brings NVMe-oF support to StarWind Virtual SAN so that, from now on, Hyper-V users can run their applications at full throttle.