The Problem of NVMe Performance in Hyper-V Environments
Even though the world is going crazy about NVMe, it is still challenging to present PCIe SSDs to an entire Hyper-V cluster effectively. Latency shoots up, and applications often see only about half of the flash performance when such disks are presented over the network. The problem is that Hyper-V VMs cannot talk to PCIe SSDs efficiently over legacy block protocols. It is impossible to enjoy the NVMe drives’ full performance if they are presented over iSCSI or FC. These protocols were designed to connect spinning disks, not flash! SCSI-based transports use a single command queue of roughly 254 outstanding commands per LUN, while NVMe was built for up to 65,535 queues with up to 65,535 commands each, so forcing NVMe traffic through one short queue entails significant I/O overhead.
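A back-of-the-envelope model makes the queue penalty concrete. By Little’s Law, outstanding I/Os = IOPS × latency, so one short queue plus network latency caps the IOPS you can ever reach. The queue depths below are the protocol limits mentioned above; the 500 µs round-trip latency and the drive’s 1M IOPS rating are illustrative assumptions, not measurements:

```python
# Little's Law: outstanding I/Os = IOPS x latency. A single SCSI-style
# queue (iSCSI/FC) holds at most ~254 commands per LUN, so once the
# network adds latency, that one queue caps achievable IOPS.
# Latency and drive rating below are illustrative assumptions.

SCSI_QUEUE_DEPTH = 254        # one queue per LUN (protocol limit)
NVME_MAX_QUEUES = 65_535      # NVMe allows up to 65,535 queues...
NVME_QUEUE_DEPTH = 65_535     # ...each up to 65,535 commands deep

def queue_capped_iops(queue_depth: int, latency_s: float) -> float:
    """Max IOPS one queue can sustain at a given per-I/O latency."""
    return queue_depth / latency_s

latency = 500e-6              # assumed 500 us round trip over the fabric
drive_rated_iops = 1_000_000  # assumed rating of a modern PCIe SSD

ceiling = queue_capped_iops(SCSI_QUEUE_DEPTH, latency)
print(f"Single-queue ceiling at 500 us: {ceiling:,.0f} IOPS")
print(f"Share of the drive's rating:    {ceiling / drive_rated_iops:.0%}")
print(f"NVMe concurrency available:     "
      f"{NVME_MAX_QUEUES * NVME_QUEUE_DEPTH:,} commands in flight")
```

Under these assumptions the single queue tops out around 508,000 IOPS, which is right in line with the “half of flash performance” figure above, while NVMe’s many deep queues keep concurrency off the critical path entirely.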
When using traditional protocols in a Windows Server environment, you would need to buy more NVMe drives to get the desired IOPS. Throwing more hardware (and money) at the problem leads to the same result: poor hardware utilization. This approach is completely budget-busting for SMB, ROBO, and Edge deployments. On top of all that, the single-queue iSCSI model creates an additional load on the server CPU. Your applications cannot peak since server processor cycles are burned on iSCSI and TCP/IP stack processing.
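The CPU tax can be sized the same way. If every I/O costs a fixed number of host cycles for iSCSI plus TCP/IP processing, the initiator’s cores impose their own IOPS ceiling regardless of how fast the SSD is. Both constants in this sketch are assumed ballpark figures for software iSCSI over a kernel TCP stack, used only to show the shape of the math:

```python
# If each I/O burns host CPU cycles on iSCSI + TCP/IP processing,
# the cores themselves cap IOPS, independent of drive speed.
# Both constants are assumptions, not measured values.

CPU_GHZ = 2.5           # assumed core clock
CYCLES_PER_IO = 30_000  # assumed iSCSI + TCP/IP cost per I/O

def cpu_capped_iops(cores: int) -> float:
    """IOPS ceiling when every I/O costs CYCLES_PER_IO host cycles."""
    return cores * CPU_GHZ * 1e9 / CYCLES_PER_IO

for cores in (2, 4, 8):
    print(f"{cores} cores on storage duty: "
          f"{cpu_capped_iops(cores):,.0f} IOPS max")
```

Every cycle spent in the storage path is a cycle the VMs’ own applications cannot use, which is why a leaner transport matters as much as raw drive speed.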
Figure: SAS performance is bottlenecked by a single short command queue
All that being said, it seems obvious why your IOPS-hungry applications do not get the expected performance while talking to storage over the traditional protocols. With these protocols in the way, adding flash to your setup typically grants only around 20% more IOPS than spindle drives can provide. That is nothing compared to true PCIe SSD performance! NVMe drives need an entirely different transport, one built around their multi-queue design, such as NVMe over Fabrics (NVMe-oF), to be presented over the network.