StarWind Resource Library

StarWind iSCSI SAN & NAS: StarWind Benchmarking Best Practice

Published: March 28, 2013


HA device performance depends directly on your network and disks. Therefore, it is essential to ensure that the SAN is configured properly before you put it into production. Carry out the following tests to verify the SAN configuration.


Test the performance of all network channels (iSCSI Data and Sync) with IPerf or NTttcp and make sure the channels operate at their advertised speeds.
It is necessary to benchmark network performance and to tune the network in order to get the maximum throughput for an iSCSI SAN environment. The tests should be performed before putting StarWind iSCSI SAN into a live production environment in order to get pure test results, and to avoid interference with the existing environment configuration.
StarWind Software has no strict requirements for which network benchmarking tools should be used; however, the most commonly used tools are NTttcp and IPerf (according to StarWind Software customers). The most popular and versatile disk benchmarking tool is IOmeter.
The diagrams below show optimal and recommended configurations for StarWind HA SAN (consisting of 2 and 3 nodes).


Note: The diagrams above show the SAN connections only. LAN connections, internal cluster communication and auxiliary connections have to be configured using separate network equipment. The cluster network should be configured according to your hypervisor guidelines and best practice guidance.

It is necessary to check:
• Each synchronization link between the StarWind servers
• Each link between StarWind 1 and Hypervisor 1
• Each link between StarWind 1 and Hypervisor 2
• Each link between StarWind 2 and Hypervisor 1
• Each link between StarWind 2 and Hypervisor 2

iSCSI data links on each StarWind node should show almost identical performance rates; otherwise, the HA SAN will run at the speed of the slowest node. Please also refer to the “Synchronization Channel Recommendations” chapter of the StarWind High Availability Best Practices guide.

If test results show less than 80% link saturation (even on only one link), then the network is not suitable for an HA implementation. If the network test results are low, it is necessary to fix the link performance issue before deploying HA. Carry out both send and receive tests on each server. Please refer to the appropriate vendor documentation when using network benchmarking tools.
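To make the acceptance criterion concrete, the 80% rule above can be expressed as a small helper. This is an illustrative Python sketch (not part of the StarWind tooling); the link speeds below are example values:

```python
def link_saturation(measured_gbps: float, advertised_gbps: float) -> float:
    """Measured throughput as a fraction of the advertised link speed."""
    return measured_gbps / advertised_gbps

def link_ok_for_ha(measured_gbps: float, advertised_gbps: float,
                   threshold: float = 0.80) -> bool:
    """A link qualifies for HA only at >= 80% of its rated speed."""
    return link_saturation(measured_gbps, advertised_gbps) >= threshold

# Example: a 10 GbE link measured at 9.4 Gbit/s with IPerf passes,
# while the same link at 7.2 Gbit/s fails the 80% criterion.
print(link_ok_for_ha(9.4, 10.0))  # True
print(link_ok_for_ha(7.2, 10.0))  # False
```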

IPerf examples:
• Server: iperf.exe -s --port 921 -w 512K
• Client: iperf.exe -c <server IP> --port 921 --parallel 4 -w 256K -l 64K -t 30



Once you have properly configured the network and tested its performance, you can proceed to testing iSCSI command throughput. This test measures the performance of an iSCSI-connected RAM disk. The RAM disk is based on the server’s local RAM, so its performance exceeds the network speed. As a result, you can easily determine the peak iSCSI throughput for a particular link.
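As a rough sanity check for the RAM disk test, you can estimate the throughput ceiling a given link imposes. The sketch below is illustrative only; the ~90% protocol-efficiency figure is an assumption, not a StarWind number:

```python
def wire_speed_mb_s(link_gbps: float, protocol_efficiency: float = 0.9) -> float:
    """Approximate iSCSI payload ceiling for one link.

    link_gbps: advertised link speed in Gbit/s.
    protocol_efficiency: assumed fraction remaining after TCP/IP and
    iSCSI framing overhead (an illustrative ~0.9, not a measured value).
    """
    raw_bytes_per_s = link_gbps * 1e9 / 8   # bits/s -> bytes/s
    return raw_bytes_per_s * protocol_efficiency / 1e6

# A 1 GbE link carries at most 125 MB/s raw, so a RAM disk benchmark
# should approach roughly 112 MB/s after protocol overhead:
print(round(wire_speed_mb_s(1.0)))  # 112
```

If the RAM disk result falls well below this ceiling, the bottleneck is the network path, not the storage.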

To test iSCSI command throughput:

1. Create the StarWind RAM Disk device on one of HA storage nodes:
a) Open the Add Device Wizard
b) Click Virtual Hard Disk
c) Select the RAM disk device option button

Note: When creating the target, enable the Allow multiple concurrent iSCSI connections checkbox, which is required for multipath throughput benchmarks.

2. Connect the RAM Disk device you have created to a client over the link you would like to benchmark (for example, 172.16.1.x).

3. Measure the performance of the device:
a) In the Disk manager on the client server, bring the new iSCSI disk online and initialize it
Note: If you undertake testing with ATTO Disk Benchmark, you also need to create a partition.
b) Run IOmeter from the client side with the following access specification: 64 KB; 100% Write; 100% Random.
c) Use IOmeter to test performance with the following access specifications: 16 KB, 32 KB, 64 KB; 100% Read, 100% Write, 50/50 Read/Write; 100% Sequential, 100% Random.
4. Enable the multipathing (MPIO) feature on the client OS to set up iSCSI Multipath.
5. Launch mpiocpl and enable support for iSCSI devices on the second tab.
6. Restart the host.
7. Connect the RAM Disk via another network channel (172.16.2.x). Multipath will add the second session as an additional path to the same disk.
8. Set the Round Robin MPIO policy.

9. Test the RAM Disk using IOmeter or ATTO Disk Benchmark.
10. Perform the same tests on all storage nodes and hypervisors.
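When comparing IOmeter runs at different block sizes, it helps to convert between throughput and IOPS. A minimal illustrative helper (assuming 1 MB = 1024 KB, matching the KB-based access specifications above):

```python
def iops_from_throughput(mb_per_s: float, block_kb: int) -> float:
    """Convert throughput to IOPS for a given block size (1 MB = 1024 KB)."""
    return mb_per_s * 1024 / block_kb

# 200 MB/s at the 64 KB access specification corresponds to 3200 IOPS;
# the same throughput at 16 KB would require 12800 IOPS.
print(iops_from_throughput(200, 64))  # 3200.0
print(iops_from_throughput(200, 16))  # 12800.0
```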

Note: If there is a significant difference between the results obtained on different channels, or between a channel’s raw throughput and the RAM disk performance over that channel, determine which of the following is the cause:
• Firewalls
• Jumbo frame misconfiguration
• Antivirus software



Step A. Test the local disk subsystem on every StarWind node (the storage where you intend to create HA virtual disks) using IOmeter or ATTO Disk Benchmark.

Step B. Test the performance of an Image File device.

To test the performance of an Image File device:

1. Open the Add Target Wizard and click Virtual Hard Disk.

2. Select the Image File device option button.

3. Click Create new virtual disk.

4. Set the location of the virtual disk to the storage previously measured in Step A.

Note: The image file device used for testing must be at least 2 GB. You can use the write-back device caching mode to increase benchmark performance.

5. Connect the Image File device to client 1.

6. Measure performance in the same way as described in the iSCSI Commands Throughput Testing chapter. Use Image File device instead of RAM Disk device.

Note: It is highly recommended to perform the same tests on the remaining storage nodes and hypervisors.

7. Compare the results of storage performance and Image File performance.

Note: Any difference may be caused by an improper stripe size in the RAID configuration.
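One way to quantify the comparison in step 7 is to compute how much of the local disk throughput (Step A) is lost over iSCSI. A small illustrative helper; the 400/360 MB/s figures below are hypothetical:

```python
def overhead_percent(local_mb_s: float, imagefile_mb_s: float) -> float:
    """Percentage of local-disk throughput lost when the same storage
    is accessed through an iSCSI-connected Image File device."""
    return (1 - imagefile_mb_s / local_mb_s) * 100

# Hypothetical: local RAID at 400 MB/s (Step A), 360 MB/s via the
# Image File device -> 10% overhead.
print(round(overhead_percent(400, 360), 1))  # 10.0
```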

Step C. Test the performance of an HA device. This performance is determined by your network and disks and will not exceed their performance levels.

To test the performance of an HA device:

1. Create an HA device by locating virtual disks on the previously measured storages of each node (step A).
Please refer to the appropriate topic of the embedded Help file.

2. To test the HA device speed, mount the HA volume on the client via all channels (172.16.1.x, 172.16.2.x).

3. Set the Round Robin MPIO policy on the client side.

Note: The MPIO policy directly affects HA device performance; Round Robin provides better results than Failover Only (fixed path).

Note: Caching increases performance. It is recommended to use at least 512 MB of cache configured in write-back mode. Write-back caching accelerates both reads and writes, while write-through mode improves only read performance.

4. Test performance using IOmeter or ATTO Disk Benchmark.

Step D. Compare the results that you obtained for local storages (Step A) with the results received during Step B and Step C.



We recommend RAID 1, RAID 0, or RAID 10 for HA implementations. RAID 5 and RAID 6 are not recommended due to their low write performance.

The performance of a RAID array depends on the stripe size and the disks used. There is no universal recommendation for the stripe size: in some cases a small stripe size, such as 4 KB or 8 KB, results in better performance; in other cases 64 KB, 128 KB, or even 256 KB is more suitable.

The decision has to be based on test results. Start with the stripe size recommended by the vendor and run the tests, then repeat the tests (Step B) with one larger and one smaller value. Comparing the three results allows you to choose the optimal stripe size.
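That three-point comparison can be scripted. The sketch below is illustrative (the Step B results are hypothetical) and simply picks the stripe size with the best throughput:

```python
def best_stripe_size(results_mb_s: dict) -> int:
    """Return the stripe size (KB) with the highest measured throughput.

    results_mb_s maps stripe size -> Step B benchmark result, e.g. the
    vendor default plus one smaller and one larger value.
    """
    return max(results_mb_s, key=results_mb_s.get)

# Hypothetical results for 32/64/128 KB stripes:
print(best_stripe_size({32: 310.0, 64: 355.0, 128: 340.0}))  # 64
```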

Performance of the HA device depends on the performance of the applied RAID array.

Note: Software RAID solutions are not supported as the basis of an HA volume.



For troubleshooting in case of low performance:

• Antivirus software can cause low performance. Make sure you have added the StarWind folder and all virtual disk files to the antivirus exclusion list.

• In most cases Jumbo Frames improve performance; however, they can sometimes cause performance degradation. It is recommended to test performance both with Jumbo Frames of different sizes and without them. If used, Jumbo Frames must be enabled on all NICs, switches, and other applicable network equipment in the SAN. Check with your hardware vendor to confirm that your networking hardware supports Jumbo Frames.
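The potential benefit of Jumbo Frames comes from amortizing header overhead over larger packets, which can be estimated as follows (illustrative sketch; assumes 40 bytes of TCP/IPv4 headers, no options, and ignores Ethernet framing):

```python
def payload_efficiency(mtu_bytes: int, header_bytes: int = 40) -> float:
    """Fraction of each IP packet available for TCP payload, assuming
    40 bytes of TCP/IPv4 headers; Ethernet framing is ignored."""
    return (mtu_bytes - header_bytes) / mtu_bytes

print(round(payload_efficiency(1500), 3))  # standard MTU: 0.973
print(round(payload_efficiency(9000), 3))  # jumbo MTU:    0.996
```

The per-packet gain is modest; in practice the larger benefit is usually fewer packets and less per-packet CPU work, which is why results should still be verified by testing rather than assumed.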