This document describes the system requirements for StarWind Virtual SAN. The technical paper is intended for virtualization administrators and StarWind users who are planning to deploy StarWind VSAN and need technical details to proceed.
CPU: Intel Xeon E5620 (or higher) or equivalent AMD Opteron
RAM: 4 GB minimum
- Hardware RAID controller is highly recommended
- StarWind Virtual SAN supports Microsoft Storage Spaces
- Software RAID implementations are NOT supported
The most appropriate RAID configuration can be selected depending on the storage requirements. Our recommendations for proper HA storage design are:
- RAID 10 for SAS or SATA hard drives
- RAID 1, 5, 6, 10 for SAS or SATA SSDs
Network: Minimum of 2 x 1GbE physical NICs
Network latency requirements:
Maximum synchronization network latency < 5 ms.
Network latency higher than 5 ms may lead to HA performance degradation.
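As an illustration only, the latency requirement above can be expressed as a simple check. The helper name and the sample round-trip values below are hypothetical; in practice you would sample latencies with a tool such as ping against the synchronization link:

```python
# Hypothetical helper (not part of StarWind): verify that measured
# synchronization-link latencies stay under the 5 ms requirement.

def sync_latency_ok(latencies_ms, threshold_ms=5.0):
    """Return True if the worst observed latency is below the threshold."""
    return max(latencies_ms) < threshold_ms

# Illustrative samples (milliseconds):
print(sync_latency_ok([0.4, 0.7, 1.2]))  # -> True, within spec
print(sync_latency_ok([0.4, 6.3, 1.2]))  # -> False, one spike above 5 ms
```

A single spike above 5 ms is enough to flag the link, since sustained latency above that level may degrade HA performance.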
Supported Operating Systems
Supported Microsoft Windows Server editions
- Windows Server Core 2008 R2
- Windows Server Core 2012
- Windows Server Core 2012 R2
- Microsoft Windows Server 2016
- Hyper-V Server 2008 R2
- Hyper-V Server 2012
- Hyper-V Server 2012 R2
- Hyper-V Server 2016
Supported Microsoft Windows Desktop editions for Management Console
- Windows 7
- Windows 8
- Windows 8.1
- Windows 10
System requirements if StarWind is deployed as a virtual machine:
Number of virtual processors: at least 4 virtual processors with 2 GHz reserved to ensure the proper functioning of the VSA.
Network settings: at least 2 NIC ports dedicated as separate vSwitches for StarWind synchronization and backup iSCSI traffic. The minimum recommended bandwidth is 1 GbE; for intensive workloads, 10 GbE or 40 GbE infrastructure is recommended.
Supported vSphere version: 5.x
RAM. The primary storage system must have 128 MB of RAM in addition to what is used or planned to be used for handling the primary storage workload. The disaster recovery location must be equipped with an additional 1 GB of RAM.
Bandwidth. The minimum required throughput is 1 Mbps.
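To put the 1 Mbps floor in perspective, a short calculation shows how long replicating a given amount of changed data takes at a given link speed. The function name is hypothetical, and decimal units (1 GB = 10^9 bytes) are assumed:

```python
# Hypothetical sizing aid: hours needed to replicate an amount of changed
# data over a link of the given speed. Assumes decimal units (GB = 10**9
# bytes) and ignores protocol overhead, so real transfers take longer.

def replication_time_hours(data_gb, link_mbps):
    bits = data_gb * 8 * 10**9          # data volume in bits
    seconds = bits / (link_mbps * 10**6)  # raw transfer time at link speed
    return seconds / 3600

# 1 GB of changed data over the 1 Mbps minimum link:
print(round(replication_time_hours(1, 1), 1))  # -> 2.2 (hours)
```

This is why 1 Mbps is only a floor: the link must be fast enough to replicate the data changed between snapshots before the next snapshot is due.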
Physical Storage Considerations. The performance of the disk that stores the replication journal on the primary node, benchmarked with an 8 MB block size, 100% sequential write workload, must be greater than or equal to the maximum write throughput of the production storage array. The journal disk free space reserve must exceed the amount of data written to the production storage array within the time frame between snapshots, plus an additional 25%.
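The two journal-disk conditions above can be sketched as a single check. The function and the sample figures are hypothetical; the throughput values would come from your own benchmark (8 MB blocks, 100% sequential write) and array monitoring:

```python
# Hypothetical check of the two journal-disk rules stated above:
#   1. journal disk sequential write speed >= production array max write speed
#   2. journal free space > data written between snapshots + 25%

def journal_disk_ok(journal_seq_write_mbps, prod_max_write_mbps,
                    journal_free_gb, data_between_snapshots_gb):
    fast_enough = journal_seq_write_mbps >= prod_max_write_mbps
    big_enough = journal_free_gb > data_between_snapshots_gb * 1.25
    return fast_enough and big_enough

# Illustrative figures: 500 MB/s journal disk vs. 400 MB/s array peak,
# 200 GB free for 150 GB written between snapshots (needs > 187.5 GB):
print(journal_disk_ok(500, 400, 200, 150))  # -> True
print(journal_disk_ok(500, 400, 180, 150))  # -> False, reserve too small
```

Undersizing either dimension causes the journal to fall behind the production array, so both conditions must hold at the array's peak write rate, not its average.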