Data Locality

Stage 1: Just cache-less iSCSI shared storage

Imagine taking top-notch servers and packing them with NVMe drives presented over iSCSI, with caching disabled. The question is: what happens to performance?

Introduction

The test environment used in this article series was built according to the recommendations for 2-node hyperconverged clusters, a typical scenario for SMB/ROBO deployments. This article describes the first of three benchmark stages and shows the performance of the 12-node production-ready StarWind HCA cluster.

Hardware

Each node was powered by Intel® Xeon® Platinum 8268 processors, 2x Intel® Optane™ SSD DC P4800X Series drives, and Mellanox ConnectX-5 100GbE NICs connected with Mellanox LinkX® copper cables to 2 Mellanox SN2700 Spectrum™ switches. In essence, these were standard StarWind HyperConverged Appliances (Supermicro SuperServer chassis) in which only the CPUs were upgraded in pursuit of the HCI industry record.

Software

In our setup, StarWind HCA ran the fastest software stack: Microsoft Hyper-V (Windows Server 2019) and StarWind Virtual SAN. The StarWind Virtual SAN service runs in Windows userland and supports both polling and interrupt-driven IO, boosting IO performance by turning spare CPU cycles into IOPS. To push performance further, we also developed TCP loopback and iSCSI loopback drivers that work around load-balancing issues in the aging Microsoft iSCSI Initiator.

On our website, you can learn more about hyperconverged infrastructures powered by StarWind Virtual SAN.

12-node StarWind HyperConverged Appliance cluster specifications:

Platform: Supermicro SuperServer 2029UZ-TR4+
CPU: 2x Intel® Xeon® Platinum 8268 Processor 2.90 GHz; Intel® Turbo Boost ON, Intel® Hyper-Threading ON
RAM: 96GB
Boot Storage: 2x Intel® SSD D3-S4510 Series
Storage Capacity: 2x Intel® Optane™ SSD DC P4800X Series. The latest available firmware installed
Networking: 2x Mellanox ConnectX-5 MCX516A-CCAT 100GbE Dual-Port NIC
Switch: 2x Mellanox SN2700 Spectrum 32 ports 100GbE Ethernet Switch

The diagram below illustrates servers’ interconnection.

Interconnection diagram

NOTE: On every server, each NUMA node had 1x Intel® Optane™ SSD DC P4800X Series drive and 1x Mellanox 100 GbE NIC. This configuration squeezed maximum performance out of each piece of hardware. Such pairing is a recommendation rather than a strict requirement: no NUMA tweaking is needed to obtain similar performance, meaning the default settings are fine.

Software Setup

Operating system. Windows Server 2019 Datacenter Evaluation version 1809, build 17763.404, was installed on all nodes with the latest updates available on May 1, 2019. With performance in mind, the power plan was set to High Performance. All other settings, including the relevant side-channel mitigations (mitigations for Spectre v1 and Meltdown were applied), were left at their defaults.

Windows Installation. We installed the Hyper-V role and configured MPIO and Failover Clustering. To speed up deployment, we made an image of Windows Server 2019 with the Hyper-V role installed and the MPIO and Failover Clustering features enabled; the image was then deployed to all 12 Supermicro servers.



Driver installation and firmware update. Once Windows was installed, the latest drivers were applied via Windows Update for each piece of hardware. Firmware updates for the Intel NVMe SSDs were installed as well.

StarWind Virtual SAN. The current production-ready StarWind VSAN version (8.0.0.12996) was installed on each server. Microsoft recommends creating at least one Cluster Shared Volume per server node, so for 12 servers we created 12 volumes formatted with ReFS.

ReFS showed consistent performance, surpassing the results we had with NTFS. Each volume was 340 GiB, for a total usable capacity of 4.08 TiB. Each volume used two-way mirror resiliency with allocation delimited to two servers. All other settings, like columns and interleave, were left at defaults. To measure persistent storage IOPS accurately, we disabled the in-memory CSV read cache.

software setup


StarWind iSCSI Accelerator. We used the built-in Microsoft iSCSI Initiator together with our own user-mode iSCSI initiator. Microsoft's iSCSI Initiator was developed in the "stone age", when servers had one- or two-socket CPUs with a single core per socket. On today's far more powerful servers, the Initiator no longer scales as it should.

So, we developed the iSCSI Accelerator, a filter driver sitting between the Microsoft iSCSI Initiator and the hardware presented over the network. Every time a new iSCSI session is created, it is assigned to a free CPU core. As a result, all CPU cores are used uniformly and latency stays minimal. Distributing workloads this way ensures smart compute resource utilization: no cores are overwhelmed while others idle.
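The session-to-core assignment described above can be sketched roughly as follows. This is an illustrative model only, not the actual StarWind driver code; the class and names are invented:

```python
# Illustrative sketch of least-loaded core assignment for new iSCSI sessions.
# Not StarWind's driver code; SessionBalancer and its fields are hypothetical.
from collections import defaultdict

class SessionBalancer:
    def __init__(self, num_cores):
        self.num_cores = num_cores
        self.load = defaultdict(int)   # sessions currently pinned to each core

    def assign(self, session_id):
        # Pick the core with the fewest active sessions (ties -> lowest index),
        # so no core is overwhelmed while others idle.
        core = min(range(self.num_cores), key=lambda c: self.load[c])
        self.load[core] += 1
        return core

balancer = SessionBalancer(num_cores=4)
cores = [balancer.assign(f"session-{i}") for i in range(8)]
# Sessions spread evenly: two per core.
```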

CPU Core load diagram


StarWind iSCSI Accelerator (Load Balancer) was installed on each cluster node in order to balance virtualized workloads between all CPU cores in Hyper-V servers.

StarWind Loopback Accelerator. As part of StarWind Virtual SAN, the StarWind Loopback Accelerator was installed and configured to significantly decrease latency and CPU load where the Microsoft iSCSI Initiator connects to the StarWind iSCSI Target over the loopback interface. It enables zero-copy memory transfers in loopback mode, bypassing most of the TCP stack.

NOTE: Due to the fast path provided by StarWind Loopback Accelerator, each iSCSI LUN had 2 loopback iSCSI sessions and 3 external partner iSCSI sessions. Least Queue Depth (LQD) MPIO policy was set. This policy maximizes network bandwidth utilization and automatically uses the active/optimized path with the smallest current outstanding queue.
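The LQD policy's path choice can be sketched as follows. The path names and counters here are hypothetical illustrations, not StarWind's or Microsoft's actual MPIO internals:

```python
# Illustrative sketch of the Least Queue Depth (LQD) MPIO policy: each new IO
# goes to the active path with the smallest current outstanding queue.
# The 2 loopback + 3 partner paths mirror the session layout in the article;
# names are made up.
paths = {
    "loopback-1": 0, "loopback-2": 0,
    "partner-1": 0, "partner-2": 0, "partner-3": 0,
}

def pick_path(outstanding):
    # Smallest outstanding queue wins; ties resolve to the first path listed.
    return min(outstanding, key=outstanding.get)

def submit_io(outstanding):
    path = pick_path(outstanding)
    outstanding[path] += 1      # IO is now outstanding on that path
    return path

first = submit_io(paths)        # all queues empty -> first path wins the tie
```

With no completions in between, five submissions land on five different paths, which is exactly the even spread LQD is meant to produce.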

iSCSI sessions interconnection diagram


Backbone iSCSI/iSER (RDMA). Like any cluster built of StarWind HyperConverged Appliances, this 12-node HCI environment featured Mellanox NICs and switches. In this study, StarWind Virtual SAN used iSER (iSCSI Extensions for RDMA) for the backbone links, delivering the maximum possible performance.

NOTE: Windows Server 2019 doesn't have iSER (RDMA) support yet. The lack of all-RDMA connections puts pressure on memory and limits performance. To eliminate the local Windows TCP/IP stack overhead, StarWind's built-in userland iSER initiator was used for data and metadata synchronization and for acknowledging "guest" writes over iSER (RDMA).

Accelerating IO Performance

As a result, the IO performance was accelerated with a combination of RDMA, DMA in loopback, and TCP connections.

NUMA node. Given the NUMA configuration of each cluster node, every virtual disk was set to replicate shared storage between two servers using a network adapter assigned to the same NUMA node as the disk. For example, on cluster node 3, the virtual disk was created on an Intel Optane SSD located on NUMA node 1, so the Mellanox ConnectX-5 100 GbE NIC used for disk mirroring was also assigned to NUMA node 1.

NUMA node assignment diagram

CSV. For the 12-node hyperconverged cluster, 12 Cluster Shared Volumes were created on top of 12 synchronously mirrored StarWind virtual disks, in line with Microsoft recommendations.



Hyper-V VMs. To benchmark cluster performance, we populated the cluster with 144 Windows Server 2019 Standard Gen 2 VMs, 12 per node. Each VM had 2 virtual processors and 2 GiB of memory, so 24 cores of each server were utilized.



NOTE: NUMA spanning was disabled to ensure that virtual machines always ran with local memory access and, therefore, optimal performance.

Benchmarking

In virtualization and hyperconverged infrastructures, it's common to judge performance by the number of input/output (I/O) operations per second, or "IOPS" – essentially, the number of reads or writes that virtual machines can perform. A single VM can generate a huge number of either random or sequential reads/writes, and in real production environments the sheer number of VMs makes the IO fully randomized. 4 kB block-aligned IO is the typical block size Hyper-V virtual machines perform, so it was our benchmark of choice.

Hardware and software vendors often use this kind of pattern to measure the best performance in the worst circumstances.

In this set of articles, we not only performed the same tests as Microsoft but also benchmarked performance under other IO patterns that are common for production environments.

VM Fleet. We used the open-source VM Fleet tool available on GitHub. VM Fleet makes it easy to orchestrate DISKSPD, the popular Windows micro-benchmark tool, in hundreds or thousands of Hyper-V virtual machines at once.

Following Intel's recommendations, we used the specified number of threads for the storage IO tests. Storage performance reached its saturation point at 32 outstanding IOs per thread (-o32). To bypass hardware and software caching, we specified unbuffered IO (-Sh). We used -r for random workloads and -b4K for a 4 kB block size, and varied the read/write ratio with the -w parameter.

Here’s how DISKSPD was started: .\diskspd.exe -b4K -t2 -o32 -w0 -Sh -L -r -d900 [...]

StarWind Command Center. Designed as an alternative to Windows Admin Center and the bulky System Center Configuration Manager, StarWind Command Center consolidates dashboards that show all the important information about the state of each environment component on a single screen.

As a single-pane-of-glass tool, StarWind Command Center covers the whole range of tasks in managing and monitoring your IT infrastructure, applications, and services. As part of the StarWind ecosystem, it manages hypervisors (VMware vSphere, Microsoft Hyper-V, Red Hat KVM, etc.) and integrates with Veeam Backup & Replication and public cloud infrastructure. On top of that, the solution incorporates StarWind ProActive Support, which monitors the cluster 24/7, predicts failures, and reacts to them before things go south.

How StarWind Command Center can be integrated into HCI

For example, StarWind Command Center Storage Performance Dashboard features an interactive chart plotting cluster-wide aggregate IOPS measured at the CSV filesystem layer in Windows. More detailed reporting is available in the command-line output of DISKSPD and VM Fleet.


The other side of storage performance is latency: how long an IO takes to complete. Many storage systems perform better under heavy queuing, which maximizes parallelism and busy time at every layer of the stack. But there's a tradeoff: queuing increases latency. For example, if you can do 100 IOPS at sub-millisecond latency, you may be able to reach 200 IOPS by tolerating higher latency. Latency is worth watching: sometimes the largest IOPS benchmark numbers are only achievable with latency that would otherwise be unacceptable.
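This IOPS/latency tradeoff follows from Little's Law (concurrency = throughput × latency). A quick sketch with made-up numbers, not figures measured on this cluster:

```python
# Little's Law applied to storage queuing: IOPS = outstanding IOs / latency.
# All numbers below are illustrative, not measurements from the HCA cluster.
def iops(outstanding_ios, latency_ms):
    return outstanding_ios / (latency_ms / 1000.0)

# Deepening the queue raises IOPS, but each IO waits longer in the queue:
low_q  = iops(outstanding_ios=1, latency_ms=0.5)   # ~2,000 IOPS at 0.5 ms
high_q = iops(outstanding_ios=8, latency_ms=2.0)   # ~4,000 IOPS at 2.0 ms
```

Doubling throughput here costs a 4x increase in per-IO latency, which is exactly why headline IOPS figures should always be read alongside their latency.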

Cluster-wide aggregate IO latency, as measured at the same layer in Windows, is plotted on the HCI Dashboard too.

Results

Any storage system that provides fault tolerance necessarily makes distributed copies of writes, which must traverse the network, incurring backend write amplification. For this reason, the largest IOPS benchmark numbers are typically achieved only with reads, especially if the storage system has common-sense optimizations to read from the local copy whenever possible, which StarWind Virtual SAN does.

NOTE: For transparency, we show the VM Fleet results and the StarWind Command Center results together with videos of those tests.

Action 1: 4K random read --> .\Start-Sweep.ps1 -b 4 -t 2 -o 32 -w 0 -p r -d 900

With 100% reads, the cluster delivered 6,709,997 IOPS: 51% of the theoretical 13,200,000 IOPS.

Where does the reference value come from? Each node had two Intel Optane NVMe SSDs, each rated at 550,000 IOPS: 2 drives * 12 nodes * 550,000 IOPS/drive = 13,200,000 IOPS.
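The arithmetic behind that ceiling can be written out directly:

```python
# Reproducing the article's theoretical-ceiling arithmetic for the cluster.
drives_per_node = 2
nodes = 12
iops_per_drive = 550_000        # Intel Optane P4800X rated 4K random read IOPS

theoretical = drives_per_node * nodes * iops_per_drive   # 13,200,000 IOPS
measured = 6_709_997                                     # Action 1 result
efficiency = measured / theoretical                      # ~0.51, i.e. 51%
```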


Action 2: 4K random read/write 90/10 --> .\Start-Sweep.ps1 -b 4 -t 2 -o 32 -w 10 -p r -d 900

With 90% random reads and 10% writes, the cluster delivered 5,139,741 IOPS.


Action 3: 4K random read/write 70/30 --> .\Start-Sweep.ps1 -b 4 -t 2 -o 32 -w 30 -p r -d 900

With 70% random reads and 30% writes, the cluster delivered 3,434,870 IOPS.

NOTE: Writes were doubled because StarWind Virtual SAN synchronously mirrors each virtual disk. This can be seen in StarWind Command Center.
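Under two-way mirroring, the backend IO count can be estimated from the read/write mix. A rough sketch using the 70/30 figures above, assuming reads are served from one copy and are not amplified:

```python
# Rough backend-IO estimate with two-way mirroring: every guest write lands on
# two nodes, so backend IOs = reads + 2 * writes. The 1.3 amplification factor
# is our own back-of-the-envelope estimate, not a number from the article.
frontend_iops = 3_434_870               # Action 3 result (70% read, 30% write)
read_ratio, write_ratio = 0.70, 0.30

backend_iops = frontend_iops * (read_ratio + 2 * write_ratio)
# With writes doubled, the backend serves ~30% more IOs than the guests see.
```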


Action 4: 2M sequential read --> .\Start-Sweep.ps1 -b 2048 -t 2 -o 16 -p s -w 0 -d 900

With 2M sequential reads, the cluster fully utilized the network (network throughput was 61.9GBps).


Action 5: 2M sequential write --> .\Start-Sweep.ps1 -b 2048 -t 2 -o 16 -p s -w 100 -d 900

With 2M sequential writes, the cluster fully utilized the network (network throughput was 50.81GBps).


Due to disk mirroring, the disk throughput in StarWind Command Center was exactly twice the 50.81GBps reported by VM Fleet.


Here are all the results for 12-server HCI cluster performance:

Run | Parameters | Result
Maximize IOPS, all-read | 4 kB random, 100% read | 6,709,997 IOPS (1)
Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 5,139,741 IOPS
Maximize IOPS, read/write | 4 kB random, 70% read, 30% write | 3,434,870 IOPS
Maximize throughput | 2 MB sequential, 100% read | 61.9GBps
Maximize throughput | 2 MB sequential, 100% write | 50.81GBps

(1) 51% of the theoretical 13,200,000 IOPS

Addition

So far, we have seen that StarWind was not 100% loaded and could utilize more network bandwidth. We also faced the expected higher latency with 4K IO blocks.

Memory latency vs. Access range

To reach maximum performance, we installed 2 more Intel Optane NVMe SSDs and ran 2 additional tests. Here are the results.

Action 6: 2M sequential read --> .\Start-Sweep.ps1 -b 2048 -t 2 -o 16 -p s -w 0 -d 900

With 2M sequential reads, the cluster fully utilized the network (network throughput was 117GBps).

This is 104% of the theoretical 112.5GBps.


Action 7: 2M sequential write --> .\Start-Sweep.ps1 -b 2048 -t 2 -o 16 -p s -w 100 -d 900

With 100% sequential 2M block writes, the cluster throughput was 100.29GBps. We got the same numbers for each node continuously synchronizing with its partner.


Here are all the results for 12-server HCI cluster performance:

Run | Parameters | Result
Maximize IOPS, all-read | 4 kB random, 100% read | 6,709,997 IOPS (1)
Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 5,139,741 IOPS
Maximize IOPS, read/write | 4 kB random, 70% read, 30% write | 3,434,870 IOPS
Maximize throughput | 2 MB sequential, 100% read | 61.9GBps
Maximize throughput | 2 MB sequential, 100% write | 50.81GBps
Maximize throughput, +2 NVMe SSDs | 2 MB sequential, 100% read | 117GBps (2)
Maximize throughput, +2 NVMe SSDs | 2 MB sequential, 100% write | 100.29GBps

(1) 51% of the theoretical 13,200,000 IOPS
(2) 104% of the theoretical 112.5GBps

Conclusion

These are the results of the first benchmarking stage for the 12-node production-ready StarWind HCA cluster built on the Supermicro SuperServer platform. Each server was powered by Intel® Xeon® Platinum 8268 processors, Intel® Optane™ SSD DC P4800X Series drives, and Mellanox ConnectX-5 100GbE NICs.

Our cache-less 12-node HCA cluster delivered 6.7 million IOPS, 51% of the theoretical 13.2 million IOPS. That is still breakthrough performance for a purely production configuration (only iSCSI was used for client access). The backbone ran over iSER, and no proprietary client-side technologies were used, so similar performance can be obtained with any hypervisor using plain iSCSI initiators and StarWind Virtual SAN.

For the next benchmark stage, we are going to max out IO performance by configuring Intel Optane NVMe SSDs as caching devices, just as Intel recommends.
