Mission-critical virtualization workloads on hyperconverged infrastructure (HCI) demand the highest reliability, uptime, and performance. To measure storage performance under various conditions, we ran three tests.
Our 12-node setup was built following the general recommendations for 2-node StarWind HyperConverged Appliance clusters. It was an all-NVMe environment: RAM for compute, Intel® Optane™ SSD DC P4800X Series drives for storage, and two Mellanox ConnectX-5 NICs for the interconnect. To be clear, the only thing distinguishing these 12 servers from the appliances we ship was the Intel® Xeon® Platinum 8268 processors.
Without cache, the cluster delivered 6.7 million IOPS, 51% of the theoretical 13.2 million IOPS, a breakthrough for a production configuration. Client access used plain iSCSI with no RDMA; only the backbone ran over iSER. No proprietary technologies were involved.
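For reference, results like these are typically gathered with a synthetic load generator such as fio. The sketch below is a hypothetical 4K random-read job against an iSCSI-attached device; the device path, queue depth, and job count are assumptions for illustration, not the exact parameters of our runs:

```shell
# Hypothetical fio job approximating a small-block random-read IOPS test.
# /dev/sdX stands for an iSCSI-attached StarWind device; adjust to your setup.
fio --name=randread-4k \
    --filename=/dev/sdX \
    --rw=randread \
    --bs=4k \
    --ioengine=libaio \
    --direct=1 \
    --iodepth=32 \
    --numjobs=8 \
    --group_reporting \
    --time_based \
    --runtime=60
```

Cluster-wide numbers of this kind are the sum of per-node results collected while all nodes run the job simultaneously.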
In the second stage, we fully loaded our all-NVMe cluster. Each server carried Intel Optane NVMe drives configured as write-back cache devices.
The 12-node all-NVMe cluster delivered 26.834 million IOPS, 101.5% of the theoretical 26.4 million IOPS; write-back caching allowed the cluster to exceed the drives' rated throughput.
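The efficiency percentages are simply the ratio of measured to theoretical IOPS (small differences come from rounding); a quick check:

```python
# Measured vs. theoretical IOPS, in millions, for the first two test stages.
stages = {
    "cache-less iSCSI": (6.7, 13.2),
    "write-back cache": (26.834, 26.4),
}

for name, (measured, theoretical) in stages.items():
    efficiency = 100 * measured / theoretical
    print(f"{name}: {efficiency:.1f}% of theoretical")
# cache-less iSCSI: 50.8% of theoretical
# write-back cache: 101.6% of theoretical
```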
For the final stage, the same cluster was reconfigured to use NVMe-oF. We brought this protocol to Windows with a Linux SPDK NVMe-oF target VM and the StarWind NVMe-oF Initiator.
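To illustrate, a minimal SPDK NVMe-oF target of the sort run in such a Linux VM can be configured through SPDK's RPC interface. Everything below is a hypothetical sketch: the NQN, serial number, PCIe address, IP address, and the choice of the RDMA transport are placeholder assumptions, not our production settings:

```shell
# Start the SPDK NVMe-oF target application, then configure it via RPC.
./build/bin/nvmf_tgt &

# Create an RDMA transport for NVMe-oF traffic (placeholder choice).
scripts/rpc.py nvmf_create_transport -t RDMA

# Attach a local NVMe drive as block device Nvme0 (PCIe address is a placeholder).
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:04:00.0

# Create a subsystem, add the drive's namespace, and listen on the fabric.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.0.10 -s 4420
```

An NVMe-oF initiator on the client side then connects to this listener to present the namespace as a local disk.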
After the protocol was enabled, our 12-node all-NVMe setup delivered 20.187 million IOPS, roughly 76% of the theoretical 26.4 million IOPS.
Similar performance results can be obtained with any hypervisor using the same technologies we did: cache-less iSCSI, write-back caching, iSCSI shared storage powered by StarWind Virtual SAN, and a Linux SPDK/DPDK target with the StarWind NVMe-oF Initiator for storage interconnection.