Data Locality

Now that is true Hyper-Converged Infrastructure performance

Leading hardware and software vendors set out to show the world how fast a production-ready hyperconverged appliance can actually be. We ran three storage benchmarks: cache-less iSCSI, iSCSI with a Write-Back cache, and NVMe-oF. The runs below describe each configuration and its corresponding performance results. Intel, Mellanox, StarWind Software, and SuperMicro (vendors listed in alphabetical order) built a highly available cluster featuring the newest hardware platform, CPUs, RAM, flash storage, and networking, powered by the latest hypervisor, software-defined storage stack, and management tools.

Mission-critical virtualization workloads on HCI require the highest reliability, uptime, and breakthrough performance. To measure storage performance, the benchmark was carried out in three stages.

Part 1: Cache-less iSCSI shared storage

The first configuration is a cache-less iSCSI, production-ready hyperconverged cluster: a traditional 2-node StarWind HyperConverged Appliance scaled out to a 12-node cluster, with Intel® Xeon® Platinum 8268 processors and RAM as compute, Intel® Optane™ SSD DC P4800X Series drives as storage capacity, and Mellanox ConnectX-5 as the interconnect. The cache-less 12-node HCA cluster delivers 6.7 million IOPS, 51% of the theoretical 13.2 million IOPS. This is breakthrough performance in a pure production configuration: only iSCSI is used for client access, without RDMA. The backbone runs over iSER, and no other proprietary technology is used.
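Benchmarks like this are typically driven by a load generator such as fio running against the iSCSI-attached devices. A hypothetical job file for a 4K random-read run is sketched below; every parameter here is an illustrative assumption, not the actual job file used for these results.

```ini
; Illustrative fio job for an iSCSI-attached LUN (assumed parameters)
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=8
runtime=60
time_based=1
group_reporting=1

[iscsi-lun]
; /dev/sdX stands in for the iSCSI-attached StarWind device
filename=/dev/sdX
```

Total queue depth per device is iodepth × numjobs (256 here), which is the usual lever for saturating low-latency flash like Optane.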

Part 2: Write-Back cache + iSCSI shared storage

In the second stage, we built an all-NVMe cluster, loading each server with Intel Optane NVMe drives configured as a Write-Back cache. The all-NVMe 12-node HCA cluster delivers 26.834 million IOPS, 101.5% of the theoretical 26.4 million IOPS. This is breakthrough performance for a production configuration with a Write-Back cache.
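A Write-Back cache acknowledges a write as soon as the data lands on the fast Optane tier and destages it to the capacity tier later, which is how a cached configuration can exceed the capacity tier's theoretical IOPS. A minimal sketch of the idea in Python (illustrative only; StarWind's actual cache implementation is proprietary and far more sophisticated):

```python
# Toy write-back cache: writes complete on the fast tier, destage later.

class WriteBackCache:
    def __init__(self, backing_store: dict):
        self.backing = backing_store   # slow tier (capacity SSDs)
        self.cache = {}                # fast tier (e.g. Optane NVMe)
        self.dirty = set()             # blocks not yet flushed

    def write(self, block: int, data: bytes) -> None:
        # Acknowledged as soon as the fast tier holds the data --
        # this is why write-back boosts effective write IOPS.
        self.cache[block] = data
        self.dirty.add(block)

    def read(self, block: int) -> bytes:
        if block in self.cache:
            return self.cache[block]   # cache hit: fast path
        return self.backing[block]     # miss: fall through to capacity tier

    def flush(self) -> None:
        # Destage dirty blocks to the backing store.
        for block in self.dirty:
            self.backing[block] = self.cache[block]
        self.dirty.clear()

store = {}
cache = WriteBackCache(store)
cache.write(0, b"hot data")
assert cache.read(0) == b"hot data"  # served from cache before any flush
assert 0 not in store                # backing store not yet updated
cache.flush()
assert store[0] == b"hot data"       # now destaged to capacity tier
```

The trade-off, of course, is that dirty data lives only on the fast tier until destaged, which is why production write-back caches pair this with redundancy across nodes.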

Part 3: Linux SPDK/DPDK target + StarWind NVMe-oF Initiator

For the third and final stage, the same HCI cluster was configured with an SPDK NVMe-oF target VM and the StarWind NVMe-oF Initiator handling the storage interconnect. In this NVMe-oF scenario, the all-NVMe 12-node cluster delivers 20.187 million IOPS, 84% of the theoretical 26.4 million IOPS.
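An SPDK NVMe-oF target of this kind is usually described by a JSON configuration that attaches the local NVMe drive as a bdev and exports it over an RDMA listener. The fragment below shows the generic shape of such a config; the PCI address, NQN, IP address, and names are placeholder assumptions, not the configuration used in this benchmark.

```json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:04:00.0" }
        }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "RDMA" } },
        {
          "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": true }
        },
        {
          "method": "nvmf_subsystem_add_ns",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": { "bdev_name": "Nvme0n1" }
          }
        },
        {
          "method": "nvmf_subsystem_add_listener",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": { "trtype": "RDMA", "adrfam": "IPv4", "traddr": "172.16.0.10", "trsvcid": "4420" }
          }
        }
      ]
    }
  ]
}
```

Because SPDK runs the whole target in user space with polled-mode drivers, this path avoids kernel block-layer overhead, which is what makes the NVMe-oF numbers above possible.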

Similar performance results can be obtained with any hypervisor using the same technologies we used for the 12-node HCI cluster: cache-less iSCSI; iSCSI shared storage with a Write-Back cache, powered by StarWind Virtual SAN; and the SPDK/DPDK target with the StarWind NVMe-oF Initiator handling the storage interconnect.