Introduction to StarWind Virtual SAN for Hyper-V
StarWind Virtual SAN® is a native Windows, hypervisor-centric, hardware-less VM storage solution. It creates a fully fault-tolerant and high-performing storage pool built for virtualization workloads by mirroring the existing servers’ storage and RAM between the participating storage cluster nodes. The mirrored storage resources are then connected to all cluster nodes and treated as local storage by all hypervisors and clustered applications. High Availability (HA) is achieved by providing multipath access to all storage nodes. StarWind Virtual SAN® delivers superior performance compared to a dedicated SAN solution because it runs locally on the hypervisor and all I/O is processed by local RAM, SSD cache, and disks, so it never gets bottlenecked by the storage fabric.
The diagram below illustrates the network and storage configuration of the resulting solution described in the guide.
NOTE: The vCenter Server must be deployed before creating the Datacenter.
1. Connect to vCenter, select the Getting Started tab.
2. Click on Create Datacenter and enter Datacenter name.
1. Click the Datacenter Getting Started tab and click Create a cluster. Enter cluster Name and click OK.
2. Open the Cluster pane and click Add host.
3. Enter the ESXi Host name or IP address and enter the administrative account.
4. Turn on vSphere HA by clicking Manage -> Settings -> vSphere HA -> Edit.
5. In the Cluster Settings window, select the Turn on vSphere HA checkbox.
NOTE: Configure network interfaces on each node to make sure that the iSCSI interface is in a different subnet and connected physically according to the network diagram above. In this document, two links with 172.16.30.x and 172.16.40.x subnets per ESXi host are used for iSCSI traffic. All actions below should be taken for each ESXi server.
1. Create vSwitch for the iSCSI traffic via the first Ethernet Switch.
2. Create vSwitch for the iSCSI traffic via the second Ethernet Switch.
NOTE: The iSCSI vSwitch requires creating a Virtual Machine Port Group and a VMkernel port. The VMkernel port requires a static IP address. It is recommended to set jumbo frames (MTU 9000) on the vSwitches and VMkernel ports used for iSCSI traffic. Additionally, users can enable vMotion on the VMkernel port.
3. Create VMKernel ports for iSCSI channels.
4. In the VMkernel adapters pane, click Add host networking and add a Virtual Machine Port Group on the vSwitches for iSCSI traffic.
5. In Ready to complete, click Finish.
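For hosts managed over SSH, the vSwitch and VMkernel steps above can also be scripted with esxcli. A minimal sketch, assuming vmnic1 is the uplink for the first iSCSI path and that the switch, port group, and IP values below (placeholders following the guide's 172.16.30.x subnet) match your environment:

```shell
# Create a standard vSwitch for the first iSCSI path and enable jumbo frames.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Add a port group and a VMkernel port with a static IP in the first iSCSI subnet.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=172.16.30.1 --netmask=255.255.255.0
```

Repeat the same sequence with vSwitch2, vmnic2, vmk2, and a 172.16.40.x address for the second iSCSI path, on each ESXi host.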
Preparing StarWind Servers
1. Install Windows Server 2016 (or 2012 R2) and StarWind VSAN on each host.
2. Make sure that server hardware used for StarWind Virtual SAN deployment satisfies the requirements listed below.
RAM: 4 GB (plus the size of the RAM cache if applicable);
CPUs: 1 socket with 2 GHz;
Hard disk 1: 100 GB for OS (recommended);
Hard disk 2: Depends on the storage volume to be used as shared storage.
Network adapter 1: Management
Network adapter 2: iSCSI1
Network adapter 3: iSCSI2
Network adapter 4: Sync1
Network adapter 5: Sync2
NOTE: It is recommended to set jumbo frames (MTU 9000) for the iSCSI and Synchronization traffic. If necessary, the Active Directory Domain Services role can be added to the StarWind host so that it can also serve as a domain controller.
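As a sketch of the jumbo-frames recommendation on the StarWind hosts, the setting can be applied from an elevated command prompt. The adapter names below assume the NICs were renamed to match the list above, and the registry keyword and 9014 value are common for many drivers but vary by NIC vendor, so verify them for your hardware:

```shell
REM Hypothetical adapter names; list the real ones first with:
REM   powershell -Command "Get-NetAdapter"
powershell -Command "Set-NetAdapterAdvancedProperty -Name 'iSCSI1','iSCSI2','Sync1','Sync2' -RegistryKeyword '*JumboPacket' -RegistryValue 9014"
```

After applying the change, confirm end-to-end with a ping using a large payload and the "do not fragment" flag (e.g., `ping -f -l 8972 172.16.30.2`).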
Installing StarWind VSAN for Hyper-V
1. Download the StarWind setup executable file from the StarWind website:
NOTE: The setup file is the same for x86 and x64 systems, as well as for all Virtual SAN deployment scenarios.
2. Launch the downloaded setup file on the server to install StarWind Virtual SAN or one of its components. The Setup wizard will appear. Read and accept the License Agreement.
3. Carefully read the information about the new features and improvements. Red text indicates warnings for users who are updating existing software installations.
4. Select Browse to modify the installation path if necessary. Click Next to continue.
5. Select the following components for the minimum setup:
- StarWind Virtual SAN Service. StarWind service is the “core” of the software. It can create iSCSI targets as well as share virtual and physical devices. The service can be managed from StarWind Management Console on any Windows computer that is on the same network. Alternatively, the service can be managed from StarWind Web Console deployed separately.
- StarWind Management Console. Management Console is the Graphic User Interface (GUI) part of the software that controls and monitors all storage-related operations (e.g., allows users to create targets and devices on StarWind Virtual SAN servers connected to the network).
NOTE: To manage StarWind Virtual SAN installed on a Windows Server Core edition with no GUI, StarWind Management Console should be installed on a different computer running the GUI-enabled Windows edition.
6. Specify Start Menu Folder.
7. Enable the checkbox if a desktop icon needs to be created. Click Next to continue.
8. When the license key prompt appears, choose the appropriate option:
- Request time-limited fully functional evaluation key.
- Request FREE version key.
- Select the previously purchased commercial license key.
9. Click the Browse button to locate the license file.
10. Review the licensing information.
11. Verify the installation settings. Click Back to make any changes or Install to proceed with the installation.
12. Enable the appropriate checkbox to launch StarWind Management Console right after the setup wizard is closed and click Finish.
13. Repeat the installation steps on the partner node.
Provisioning Storage with StarWind VSAN
Creating StarWind HA devices
1. Open StarWind Management Console and click on the Add Device (advanced) button.
2. Select Hard disk device as the type of device to be created. Click Next to continue.
3. Select Virtual disk. Click Next to continue.
4. Specify virtual disk Name, Location, and Size.
NOTE: Image file device and HA device can be extended later according to the steps in KB article:
5. Specify virtual disk options.
NOTE: The sector size should be 512 bytes when ESXi is used.
6. Define the RAM caching policy, specify the cache size (in the corresponding units), and click Next to continue.
NOTE: The recommended RAM cache size is 1 GB per 1 TB of storage.
7. Define the Flash caching policy and the cache size. Click Next to continue.
NOTE: The recommended Flash cache size is 10% of the Device size.
8. Specify Target Parameters and select the Allow multiple concurrent iSCSI connections checkbox to enable several clients to connect simultaneously to the target.
9. Click Create to add a new device and attach it to the target.
10. Click Finish to close the wizard.
11. Right-click on the recently created device and select Replication Manager from the shortcut menu.
12. Click Add replica.
Select the Required Replication Mode
The replication can be configured in one of two modes:
Synchronous “Two-Way” Replication
Synchronous, or active-active, replication ensures real-time synchronization and load balancing of data between two or three cluster nodes. A three-node configuration tolerates the failure of two out of three storage nodes and enables the creation of an effective business continuity plan. With synchronous mirroring, each write operation requires confirmation from both storage nodes. This guarantees the reliability of data transfers but is demanding in terms of bandwidth, since mirroring will not work over high-latency networks.
Asynchronous “One-Way” Replication
Asynchronous replication is used to copy data over a WAN to a location separate from the main storage system. With asynchronous replication, confirmation from each storage node is not required during the data transfer. Asynchronous replication does not guarantee data integrity in case of storage or network failure; hence, some data loss may occur, which makes asynchronous replication a better fit for backup and disaster recovery purposes where some data loss is acceptable. The replication process can be scheduled to prevent overloading the main storage system and network channels.
Please select the required option:
Synchronous “Two-Way” replication
Asynchronous "One-Way" Replication
NOTE: Asynchronous replication requires a network bandwidth of at least 100 Mbps. The asynchronous node uses the LSFS device by design. Please make sure that the asynchronous node meets the LSFS device requirements:
1. Select Asynchronous “One-Way” Replication.
2. Enter Host name or IP address of the Asynchronous node.
3. Choose the Create New Partner Device option.
4. Specify the partner device Location. Optionally, modify the target name by clicking the appropriate button.
5. Click Change Network Settings.
6. Specify the network for asynchronous replication between the nodes. Click OK and then click Next.
7. In Select Partner Device Initialization Mode, select Synchronize from existing Device and click Next.
8. Specify Scheduler Settings and click Next.
NOTE: The size of journal files and number of snapshots depends on the settings specified in this step.
9. Specify the path for journal files and click Next.
NOTE: By default, the journal files will be located on the node with the original device. However, it is highly recommended not to store journal files on the same drive where the original device is located. Additionally, the C:\ drive should not be used as the path for journal files to avoid any issues with Windows OS.
If the drive where the StarWind device is located is selected, a warning message about possible performance issues will pop up. If no additional volume is available for storing the journals, click I understand the potential problem. Use the selected path.
10. Press the Create Replica button.
11. Wait until StarWind service creates a device and click Close to complete the device creation.
Selecting the Failover Strategy
StarWind provides two options for configuring a failover strategy: Heartbeat and Node Majority.
The Heartbeat failover strategy avoids the “split-brain” scenario, in which the HA cluster nodes are unable to synchronize but continue to accept write commands from the initiators independently. It can occur when all synchronization and heartbeat channels disconnect simultaneously and the partner nodes do not respond to the node’s requests. As a result, the StarWind service assumes the partner nodes to be offline and continues operating in single-node mode using the data written to it.
If at least one heartbeat link is online, the StarWind services can communicate with each other via this link. The device with the lowest priority will be marked as not synchronized and subsequently blocked for further read and write operations until the synchronization channel is restored. At the same time, the partner device on the synchronized node flushes data from the cache to the disk to preserve data integrity in case the node goes down unexpectedly. It is recommended to assign several independent heartbeat channels during replica creation to improve system stability and avoid the “split-brain” issue.
With the heartbeat failover strategy, the storage cluster will continue working with only one StarWind node available.
The Node Majority failover strategy maintains the synchronization connection without any additional heartbeat links. The failure-handling process starts when a node detects the loss of connection with its partner.
The main requirement for keeping the node operational is an active connection with more than half of the HA device’s nodes. Calculation of the available partners is based on their “votes”.
In the case of two-node HA storage, both nodes will be blocked if there is a problem on a node itself or in the communication between them, since neither node holds a majority of votes. Therefore, the Node Majority failover strategy requires adding a third Witness node, which participates in the node count for the majority but neither contains data nor is involved in processing clients’ requests. If an HA device is replicated between 3 nodes, no Witness node is required.
With the Node Majority failover strategy, the failure of only one node can be tolerated. If two nodes fail, the third node will also become unavailable to clients’ requests.
Please select the required option:
1. Select Failover Strategy.
2. Select Create new Partner Device and click Next.
3. Select a partner device Location.
4. Click Change Network Settings.
5. Specify the interfaces for Synchronization and Heartbeat Channels. Click OK and then click Next.
6. In Select Partner Device Initialization Mode, select Synchronize from existing Device and click Next.
7. Click Create Replica. Click Finish to close the wizard.
The successfully added device appears in StarWind Management Console.
8. Follow a similar procedure to create the other virtual disks that will be used as storage repositories.
1. Select the Node Majority failover strategy and click Next.
2. Choose Create new Partner Device and click Next.
3. Specify the partner device Location and modify the target name if necessary. Click Next.
4. In Network Options for Replication, press the Change network settings button and select the synchronization channel for the HA device.
5. In Specify Interfaces for Synchronization Channels, select the checkboxes with the appropriate networks and click OK. Then click Next.
6. Select Synchronize from existing Device as the partner device initialization mode.
7. Press the Create Replica button and close the wizard.
8. The added devices will appear in StarWind Management Console.
Repeat the steps above to create other virtual disks if necessary.
Adding Witness Node
The Witness node can be configured on a separate host or as a virtual machine in the cloud. The StarWind Virtual SAN service must be installed on it.
NOTE: Since the device created in this guide is replicated between 2 active nodes with the Node Majority failover strategy, a Witness node must be added to it.
1. Open StarWind Management Console, right-click on the Servers field and press the Add Server button. Add a new StarWind Server which will be used as the Witness node and click OK.
2. Right-click on the HA device with the configured Node Majority failover policy, select Replication Manager, and press the Add Replica button.
3. Select Witness Node.
4. Specify the Witness node Host Name or IP address. The default Port Number is 3261.
5. In Partner Device Setup, specify the Witness device Location. Optionally, modify the target name by clicking the appropriate button.
6. In Network Options for Replication, select the synchronization channel with the Witness node by clicking the Change Network Settings button.
7. Specify the interface for Synchronization and Heartbeat and click OK.
8. Click Create Replica and then close the wizard.
9. Repeat the steps above to create other virtual disks if necessary.
Adding Discover Portals
1. To connect the previously created devices to the ESXi host, click Storage -> Adapters -> Configure iSCSI and choose the Enabled option to enable the Software iSCSI storage adapter.
2. Click Add and select Add Software iSCSI adapter. Click OK.
3. The list of available storage adapters appears. Select iSCSI Software Adapter and open the Targets tab.
4. In the Configure iSCSI window, under Dynamic Targets, click on the Add Dynamic Target button to specify iSCSI interfaces. Add dynamic targets of both storage hosts.
5. Perform the same procedure for each StarWind server by clicking Add and specifying the server IP address.
6. Select Scan for new Storage Devices and Scan for new VMFS Volumes. Then click OK.
7. Repeat the steps 1-6 on the other ESXi node, specifying corresponding IP addresses for the iSCSI subnet.
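For reference, steps 1-6 can also be performed with esxcli over SSH. A hedged sketch, assuming the software iSCSI adapter is vmhba64 (check the real name with `esxcli iscsi adapter list`) and using placeholder StarWind node addresses from the guide's 172.16.30.x iSCSI subnet:

```shell
# Enable the software iSCSI adapter (equivalent to step 1).
esxcli iscsi software set --enabled=true

# Add dynamic (SendTargets) discovery portals for both StarWind nodes (step 4).
# 172.16.30.10 and 172.16.30.20 are placeholder IPs; substitute your own.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=172.16.30.10:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=172.16.30.20:3260

# Rescan for new storage devices and VMFS volumes (equivalent to step 6).
esxcli storage core adapter rescan --all
esxcli storage filesystem rescan
```

Run the same sequence on the other ESXi node with the addresses for its iSCSI subnet.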
1. Right-click on the host, select New Datastore and choose VMFS.
2. Specify Datastore name and select the device to add to the datastore.
3. Enter Datastore Size and click Next.
4. In Ready to complete, verify the settings and click Finish.
5. Check another host for a new datastore.
6. If the list of existing datastores does not include the new datastore, click Rescan Storage...
7. Add another Datastore in the same way and select the second device for the second datastore.
8. Verify that the storages are connected to both hosts. Otherwise, rescan the storage adapter.
9. Path selection policy for datastores changes from Most Recently Used to Round Robin automatically. To check it, click the Configure button, choose the Storage Devices tab, select the device, and click the Edit Multipathing button.
10. Then select the Round Robin MPIO policy.
1. Click Manage -> Settings -> Security Profile -> Services -> SSH -> Edit.
2. Select Start and stop with host in the Startup Policy drop-down list.
3. Connect to the host using an SSH client (e.g., PuTTY).
4. Check the device list using the following commands:
esxcli storage nmp device list
For each device, adjust the Round Robin IOPS limit from 1000 to 1 using the following command:
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=
NOTE: Append the StarWind device UID to the end of the command. Also, the Rescan Script already contains this parameter, and the system performs this action automatically.
5. Repeat the steps 1-4 on each host for each datastore.
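Steps 1-4 can be tedious with many devices, so a loop is a common shortcut. A sketch, assuming the StarWind devices are the ones whose identifiers begin with `eui.` in the `esxcli storage nmp device list` output (verify this on your host before running it):

```shell
# For every eui.* device: force the Round Robin path selection policy,
# then set its IOPS limit to 1 as recommended above.
for dev in $(esxcli storage nmp device list | awk '/^eui\./ {print $1}'); do
    esxcli storage nmp device set --device=$dev --psp=VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$dev
done
```

Run the loop on each ESXi host; re-run `esxcli storage nmp device list` afterwards to confirm the policy and IOPS setting took effect.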