Existing storage protocols — iSCSI, iSER, FC, NFS, and SMB3 — have proved inefficient at working with network-attached flash. These protocols were designed for spinning disks and consequently inherited the resulting limitations, such as a single, shallow command queue. When used with NVMe SSDs, these protocols can cut performance by up to 50% while increasing the CPU load on client systems. From a financial perspective, this is a significant waste: roughly 30-40% more NVMe SSDs, CPU cores, and associated licenses must be purchased just to compensate for the legacy protocols' inefficiency. For ROBO and edge deployments, this waste is multiplied hundreds or thousands of times simply due to the sheer number of locations.
To date, Windows has no built-in tools for working with network-attached NVMe SSDs. NVMe-oF, the protocol that lets SQL, M&E applications, and VMs extract the full performance of NVMe flash, isn't available on Windows. As a result, Microsoft customers cannot get the best performance and latency for their applications. There are some NVMe-oF-capable network cards (NICs) that allow Windows and Windows Server applications to work with NVMe flash over the network. Yet upgrading the entire interconnect, with NVMe-oF-capable NICs and switches for all existing systems, will remain a very costly project for the foreseeable future.
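For contrast, this is roughly what the missing built-in tooling looks like on Linux, where the stock nvme-cli utility can discover and attach a remote NVMe-oF namespace in two commands. The target address, port, and NQN below are placeholders for illustration, not a real deployment:

```shell
# Sketch only: discover and attach an NVMe/TCP target with Linux nvme-cli.
# 192.168.1.10, port 4420, and the NQN are hypothetical placeholder values.

# Discover NVMe-oF subsystems exposed by the target:
nvme discover -t tcp -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem by its NQN:
nvme connect -t tcp -a 192.168.1.10 -s 4420 -n nqn.2014-08.org.example:target1

# The remote namespace now appears as a local block device (e.g. /dev/nvme1n1):
nvme list
```

Once connected, applications see an ordinary local NVMe block device, which is exactly the experience Windows workloads currently lack without specialized NIC hardware.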