
Configuring HA NFS File Server in Windows Server 2016

Published: December 25, 2017


This technical paper covers the configuration of Highly Available shared storage for a File Server for General Use in Windows Server 2016. It describes how to configure StarWind Virtual SAN, create shared storage, and build a File Server for General Use that keeps server application data on file shares over the NFS protocol and makes files continuously accessible to end users. Backed by VSAN reliability, this architecture ensures that file shares remain available and accessible to the clustered nodes and the VMs in the cluster.

StarWind Virtual SAN® is a hardware-less storage solution that creates a fault-tolerant, high-performing storage pool built for virtualization workloads by mirroring the existing servers’ storage and RAM between the participating storage cluster nodes. The mirrored storage resource is then treated just like local storage. StarWind Virtual SAN enables simple configuration of highly available shared storage for the File Server and delivers excellent performance and advanced data protection features.

This guide is intended for experienced Windows Server users and system administrators. It provides detailed instructions on how to configure HA shared storage for a File Server in Windows Server 2016 with StarWind Virtual SAN as the storage provider.

A full set of up-to-date technical documentation can always be found here or by pressing the Help button in the StarWind Management Console.

For any technical inquiries, please visit our online community or Frequently Asked Questions page, or use the support form to contact our technical support department.

Pre-Configuring the Servers

The reference network diagram of the configuration described further in this guide is provided below.


NOTE: Additional network connections may be necessary, depending on the cluster setup and application requirements. For any technical help with configuring additional networks, please do not hesitate to contact the StarWind support department via the online community forum or via the support form (depending on your support plan).

1. Make sure that you have a domain controller and that the servers being configured have been added to the domain.

2. Install the Failover Clustering and Multipath I/O features, as well as the Hyper-V role, on both servers. This can be done through Server Manager (the Add Roles and Features menu item).
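If you prefer PowerShell over Server Manager, the same features can be installed from an elevated prompt. This is a sketch; run it on each node and adjust the feature list to your setup:

```powershell
# Install Failover Clustering, Multipath I/O, and the Hyper-V role,
# plus their management tools; the node reboots automatically if required.
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO, Hyper-V `
    -IncludeManagementTools -Restart
```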

3. Configure network interfaces on each node to make sure that Synchronization and iSCSI/StarWind heartbeat interfaces are in different subnets and connected according to the network diagram above.

In this document, the 10.128.1.x subnet is used for iSCSI/StarWind heartbeat traffic, while the 10.128.2.x subnet is used for Synchronization traffic.
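As a sketch, the interfaces can be assigned static addresses with PowerShell. The interface aliases "iSCSI" and "Sync" and the host addresses below are assumptions; replace them with the actual adapter names and addresses on your nodes:

```powershell
# Node 1: assign the iSCSI/heartbeat and Synchronization addresses
# (aliases and host octets are examples; check Get-NetAdapter first)
New-NetIPAddress -InterfaceAlias "iSCSI" -IPAddress -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync"  -IPAddress -PrefixLength 24
```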

4. To allow iSCSI Initiators to discover all StarWind Virtual SAN interfaces, the StarWind configuration file (StarWind.cfg) should be changed after stopping the StarWind Service on the node where it will be edited.

Locate the StarWind Virtual SAN configuration file (the default path is C:\Program Files\StarWind Software\StarWind\StarWind.cfg) and open it with WordPad as Administrator.

Find the <iScsiDiscoveryListInterfaces value="0"/> string and change the value from 0 to 1 (it should look as follows: <iScsiDiscoveryListInterfaces value="1"/>). Save the changes and exit WordPad. Once StarWind.cfg is changed and saved, the StarWind Service can be started.
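This edit can also be scripted. The sketch below assumes the default installation path and the service name StarWindService; verify both on your system before running it:

```powershell
$cfg = 'C:\Program Files\StarWind Software\StarWind\StarWind.cfg'

# Stop the service before editing the configuration file
# (the service name StarWindService is an assumption; check Get-Service)
Stop-Service -Name StarWindService

# Flip iScsiDiscoveryListInterfaces from 0 to 1
(Get-Content $cfg) -replace '<iScsiDiscoveryListInterfaces value="0"/>', '<iScsiDiscoveryListInterfaces value="1"/>' | Set-Content $cfg

Start-Service -Name StarWindService
```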

Installing File Server and NFS File Server Roles

1. Open Server Manager: Start -> Server Manager.

2. Select: Manage -> Add Roles and Features.

3. Follow the wizard’s steps to install the File Server and NFS File Server roles.


NOTE: Restart the server after installation is completed.

Enabling Multipath Support

4. On the cluster nodes, open the MPIO manager: Start -> Administrative Tools -> MPIO.

5. Go to the Discover Multi-Paths tab.

6. Tick the Add support for iSCSI devices checkbox and click Add.


7. When prompted to restart the server, click Yes to proceed.

NOTE: Repeat the procedure on the second server.
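The MPIO iSCSI claim can also be enabled with PowerShell (a sketch; a reboot is still required for the claim to take effect):

```powershell
# Equivalent of ticking "Add support for iSCSI devices" in the MPIO manager
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Restart the node so that the claim takes effect
Restart-Computer
```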

Downloading, Installing, and Registering the Software

8. Download the StarWind setup executable file from our website by following this link:

NOTE: The setup file is the same for x86 and x64 systems, as well as for all Virtual SAN deployment scenarios.

9. Launch the downloaded setup file on the server where you wish to install StarWind Virtual SAN or one of its components. The setup wizard will appear:

10. Read and accept the License Agreement.


Click Next to continue.

11. Carefully read the information about new features and improvements. Red text indicates warnings for users who update existing software installations.


Click Next to continue.

12. Select the following components for the minimum setup and click Next to continue.


  • StarWind Virtual SAN Service. StarWind Service is the core of the software. It can create iSCSI targets as well as share virtual and physical devices. The service can be managed from StarWind Management Console on any Windows computer or VSA on the same network. Alternatively, the service can be managed from StarWind Web Console, deployed separately.
  • StarWind Management Console. The Management Console is the Graphical User Interface (GUI) part of the software that controls and monitors all storage-related operations (e.g., it allows users to create targets and devices on StarWind Virtual SAN servers connected to the network).

13. Specify the Start Menu folder.


Click Next to continue.

14. Enable the checkbox if you want to create a desktop icon and click Next to continue.


15. You will be asked to request a time-limited, fully functional evaluation key, a free version key, or to use the commercial license key sent to you with the purchase of StarWind Virtual SAN. Select the appropriate option.


Click Next to continue.

16. Click Browse to locate the license file.


Click Next to continue.

17. Verify the installation settings. Click Back to make any changes, or Install to continue.


18. Select the appropriate checkbox to launch the StarWind Management Console immediately after the setup wizard is closed.


Click Finish to close the wizard.

19. Repeat the installation steps on the partner node.

NOTE: To manage StarWind Virtual SAN installed on a Server Core OS edition, StarWind Management Console must be installed on a different computer running the GUI-enabled Windows edition.

Configuring Shared Storage

20. Launch StarWind Management Console by double-clicking the StarWind tray icon.

NOTE: StarWind Management Console cannot be installed on an operating system without a GUI. It can be installed on the Windows 7, 8, 8.1, and 10 desktop editions, as well as on GUI-enabled editions of Windows Server 2008 R2, 2012, 2012 R2, and 2016.


If StarWind Service and Management Console are installed on the same server, the Management Console will automatically add the local StarWind node to the Console after the first launch. Then, Management Console automatically connects to it using the default credentials. To add remote StarWind servers to the Console, use the Add Server button on the control panel.

StarWind Management Console will ask you to specify the default storage pool on the server you’re connecting to for the first time. Please, configure the default storage pool to use one of the volumes you have prepared earlier. All the devices created through the Add Device wizard will be stored on it. Should you decide to use an alternative storage pool for your StarWind virtual disks, please use the Add Device (advanced) menu item.


Press the Yes button to configure the storage pool. Should you need to change the storage pool destination, press Choose path… and point the browser to the required disk.

NOTE: Each array used by StarWind Virtual SAN to store virtual disk images should meet the following requirements:

  • initialized as GPT;
  • have a single NTFS-formatted partition;
  • have a drive letter assigned.
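A disk that will host the storage pool can be prepared to meet these requirements with PowerShell. The disk number and volume label below are examples; check the output of Get-Disk first:

```powershell
# Initialize the disk as GPT, create a single NTFS partition,
# and let Windows assign a drive letter
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "StarWindPool"
```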

The steps below describe how to prepare an HA device for the Witness drive. Other devices should be created in the same way.

21. Select either of the two StarWind servers to start the device creation and configuration.

22. Press the Add Device (advanced) button on the toolbar.

23. Add Device Wizard will appear. Select Hard Disk Device and click Next.

24. Select the Virtual disk and click Next.

25. Specify the virtual disk name, location, and size and click Next.


26. Choose the type of StarWind device (Thick-provisioned or LSFS), specify the virtual block size, and click Next.


27. Define the caching policy, specify the cache size, and click Next.

NOTE: If caching is used, it is recommended to assign 1 GB of L1 cache in Write-Back or Write-Through mode per 1 TB of storage capacity.


28. Define the Flash Cache Parameters policy and size if necessary. Choose an SSD location in the wizard. Click Next to continue.

NOTE: The recommended size of the L2 cache is 10% of the initial StarWind device capacity.


29. Specify the target parameters.

Select the Target Name checkbox to enter a custom name of the target. Otherwise, the name will be generated automatically based on the target alias. Click Next to continue.


30. Click Create to add a new device and assign it to the target. Then, click Close to close the wizard.

31. Right-click in the servers field and click the Add Server button. Add the new StarWind server that will be used as the partner HA node.


Press OK and Connect buttons to continue.

32. Right-click the device you have just created and select Replication Manager.


The Replication Manager window will appear. Press the Add Replica button.

33. Select Synchronous two-way replication and click Next to proceed.


34. Specify the partner server IP address.


The default StarWind management port is 3261. If you have configured a different port, make sure to change the Port Number value accordingly. Click Next.

35. Select Heartbeat Failover Strategy and click Next.


36. Choose Create new Partner Device and click Next.


37. Specify the partner device location and, if necessary, change the target name of the device.


Click Next.

38. Select Synchronization and Heartbeat channels for the HA device by clicking the Change network settings button.


39. Specify the interfaces for Synchronization and Heartbeat. Click OK. Then click Next.


NOTE: It is recommended to configure the Heartbeat and iSCSI channels on the same interfaces to avoid the split-brain issue. If the Synchronization and Heartbeat interfaces are located on the same network adapter, it is recommended to assign an additional Heartbeat interface to a separate adapter.

40. Select Synchronize from existing Device as a partner device initialization mode and click Next.


41. Press the Create Replica button. Then click Close to close the wizard.

42. The added devices will appear in the StarWind Management Console.

Repeat the steps above to create other virtual disks if necessary.

Once all devices are created, the Management Console should look as follows.


Discovering Target Portals

In this section, we discuss how to discover Target Portals on each StarWind node.

43. Launch Microsoft iSCSI Initiator on the first StarWind node: Start > Administrative Tools > iSCSI Initiator or iscsicpl from the command line interface. The iSCSI Initiator Properties window will appear.

44. Navigate to the Discovery tab. Click the Discover Portal button. In the Discover Target Portal dialog box, enter the local IP address.


45. Click the Advanced button, select Microsoft iSCSI Initiator as your Local adapter and keep Initiator IP as it is set by default. Press OK twice to complete the Target Portal discovery.


46. Click the Discover Portal button once again.

47. In the Discover Target Portal dialog box, enter the iSCSI IP address of the partner node and click the Advanced button.


48. Select Microsoft iSCSI Initiator as a Local adapter, and select the initiator IP address from the same subnet. Click OK twice to add the Target Portal.


49. Target portals are added on the local node.
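The same portal discovery can be done with PowerShell. This is a sketch; the addresses follow the subnets used in this guide, and the exact host addresses are assumptions:

```powershell
# Discover the local portal (loopback)
New-IscsiTargetPortal -TargetPortalAddress ""

# Discover the partner node's iSCSI portal, binding the initiator
# to the local iSCSI interface in the same subnet (addresses are examples)
New-IscsiTargetPortal -TargetPortalAddress "" `
    -InitiatorPortalAddress ""
```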


50. Go through the same steps on the partner node.

51. All target portals are added to the partner node.


Connecting Targets

52. Launch Microsoft iSCSI Initiator on the first StarWind node and click on the Targets tab. The previously created targets should be listed in the Discovered Targets section.

NOTE: If the created targets are not listed, check the firewall settings of the StarWind Server and the list of networks served by the StarWind Server (go to StarWind Management Console -> Configuration -> Network).


53. Select a target for the Witness device, discovered from the local server and click Connect.

54. Enable the checkboxes as shown in the image below and click Advanced.


Select Microsoft iSCSI Initiator in the Local adapter text field.

Select the loopback IP address ( in the Target portal IP list.


Click OK twice to connect the target.

NOTE: It is recommended to connect the Witness device only via the loopback ( address. Do not connect the Witness device target from the partner StarWind node.

55. Select the target for the NFS-share device discovered from the local server and click Connect.

56. Enable the checkboxes as shown in the image below and click Advanced.


57. Select Microsoft iSCSI Initiator in the Local adapter text field.

Select the loopback IP address ( in the Target portal IP list.


Click OK twice to connect the target.

58. Select the target for the NFS-share device discovered from the partner StarWind node and click Connect.

59. Enable the checkboxes as shown in the image below and click Advanced.


60. Select Microsoft iSCSI Initiator in the Local adapter text field.

61. In the Target portal IP list, select the IP address of the iSCSI channel on the partner StarWind node and the Initiator IP address from the same subnet. Click OK twice to connect the target.


62. Repeat the above actions for all HA device targets. The result should look like the picture below.


63. Repeat the steps described in this section on the partner StarWind node, specifying the corresponding IP addresses for the iSCSI channel. The result should look like the screenshot below.
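Targets can also be connected from PowerShell. The IQNs below are placeholders, not the real target names; list the discovered targets with Get-IscsiTarget and substitute the actual NodeAddress and portal values:

```powershell
# List the discovered targets to obtain their IQNs
Get-IscsiTarget

# Connect a local target via loopback, persistently and with MPIO enabled
# (the IQN is a placeholder)
Connect-IscsiTarget -NodeAddress "" `
    -TargetPortalAddress "" -InitiatorPortalAddress "" `
    -IsPersistent $true -IsMultipathEnabled $true

# Connect the same device's partner target over the iSCSI channel
# (IQN and addresses are placeholders)
Connect-IscsiTarget -NodeAddress "" `
    -TargetPortalAddress "" -InitiatorPortalAddress "" `
    -IsPersistent $true -IsMultipathEnabled $true
```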


Multipath Configuration

NOTE: It is recommended to configure different MPIO policies depending on the iSCSI channel throughput. For 1 Gbps iSCSI channels, the Failover Only MPIO policy is recommended. For 10 Gbps iSCSI channels, the Round Robin or Least Queue Depth MPIO policy is recommended.
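The default MPIO load-balance policy for newly claimed devices can also be set globally with PowerShell (a sketch; FOO = Fail Over Only, RR = Round Robin, LQD = Least Queue Depth):

```powershell
# 1 Gbps iSCSI channels: Fail Over Only
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# 10 Gbps iSCSI channels: Round Robin (or LQD for Least Queue Depth)
# Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Verify the current MPIO disks and their paths
mpclaim -s -d
```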

64. With the Failover Only MPIO policy, it is recommended to set localhost ( as the active path. Select a target located on the local server and click Devices.


65. The Devices dialog appears. Click MPIO.


66. Select Fail Over Only load balance policy, and then designate the local path as active.


67. You can verify that the loopback path ( is active by selecting it from the list and clicking Details.


68. Repeat the same steps for each device on both nodes.

69. Round Robin or Least Queue Depth MPIO policy can be set in the Device Details window.


70. Initialize the disks and create partitions on them using the Disk Management snap-in. To create the cluster, the disk devices must be initialized and formatted on both nodes.

NOTE: It is recommended to initialize the disks as GPT.

Creating a Cluster

NOTE: To avoid issues during cluster validation configuration, it is recommended to install the latest Microsoft updates on each node.

71. Open Server Manager. Select the Failover Cluster Manager item from the Tools menu.


72. Click the Create Cluster link in the Actions section of the Failover Cluster Manager.

73. Specify the servers which are to be added to the cluster.


Click Next to continue.

74. Validate the configuration by running the cluster validation tests: select “Yes…” and click Next to continue.


75. Specify the cluster name and IP address and click Next to continue.

NOTE: If the cluster servers get IP addresses over DHCP, the cluster also gets its IP address over DHCP. If the IP addresses are set statically, you will be prompted to set the cluster IP address manually.


76. Make sure that all the settings are correct. Click Previous to change the settings (if necessary):

NOTE: If the “Add all eligible storage to the cluster” checkbox is enabled, the wizard will add all disks to the cluster automatically, and the device with the smallest storage volume will be assigned as the Witness. It is recommended to uncheck this option before clicking Next and to add the cluster disks and the Witness drive manually.


77. The process of cluster creation starts. Upon completion, the system displays a summary with detailed information.


Click Finish to close the wizard.
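The validation and cluster creation steps above have PowerShell equivalents. The node names, cluster name, and IP address below are examples:

```powershell
# Validate the configuration first (node names are examples)
Test-Cluster -Node "SW-Node1", "SW-Node2"

# Create the cluster without adding all eligible storage automatically,
# so that cluster disks and the Witness drive can be added manually
New-Cluster -Name "NFS-Cluster" -Node "SW-Node1", "SW-Node2" `
    -StaticAddress "" -NoStorage
```

The -NoStorage switch corresponds to unchecking the “Add all eligible storage to the cluster” option in the wizard.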

Configuring Disk Witness in Quorum

78. Open Failover Cluster Manager.

79. Go to Cluster -> Storage -> Disks.

80. Click Add Disk in the Actions panel, choose StarWind disks from the list, and click OK.

81. To configure the Witness drive, right-click Cluster->More Actions->Configure Cluster Quorum Settings, follow the wizard, and use the default quorum configuration.
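Steps 78–81 can be sketched in PowerShell; the disk name below is an example, so check the output of Get-ClusterAvailableDisk first:

```powershell
# Add the available StarWind disks to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Configure the quorum to use the Witness disk (disk name is an example)
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"
```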


Configuring a File Server role with a general use option

82. To configure the File Server role, open Failover Cluster Manager.

83. Right-click on the cluster name, then click Configure Role, and click Next to continue.


84. Select the File Server item from the list in High Availability Wizard and click Next to continue.


85. Select File Server for general use and click Next.


86. On the Client Access Point page, in the Name text field, type the NetBIOS name that will be used to access the File Server, and specify the IP address for it.


Click Next to continue.

87. Select the Cluster disk and click Next.


88. Check whether the specified information is correct. Click Next to proceed, or Previous to change the settings.


89. Once installation finishes successfully, click Finish to close the Wizard.
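The role can also be created with PowerShell (a sketch; the storage disk name and IP address are examples, and the role name matches the one used in this guide):

```powershell
# Create a File Server for general use with a client access point
Add-ClusterFileServerRole -Name "NFS-FILESERVER-TEST" `
    -Storage "Cluster Disk 2" -StaticAddress ""
```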

90. The newly created role should look as shown in the screenshot below.


NOTE: If the role status is Failed and it is unable to Start, please, follow the next steps:

  • Open Active Directory Users and Computers;
  • Enable Advanced view if it is not enabled;
  • Edit the properties of the OU containing the cluster computer object (in our case – NFS-FILESERVER-TEST);
  • Open the Security tab and click Advanced;
  • In the emerged window, press Add (the Permission Entry dialog box opens) and click Select a principal;
  • In the appeared window, click Object Types, select Computers, and click OK;
  • Enter the name of the cluster computer object (in our case – NFS-FILESERVER-TEST);


  • Go back to the Permission Entry dialog, scroll down, and select Create Computer Objects.


  • Click OK on all opened windows to confirm the changes.
  • Open Failover Cluster Manager, right-click the FS role, and click Start Role.

Sharing a Folder

To share a folder:

91. Open Failover Cluster Manager.

92. Expand the cluster and then click Roles.

93. Right-click the file server role and then press Add File Share.

94. On the Select the profile for this share page, click NFS Share – Quick and then click Next.


95. Select Available Storage to host the share. Click Next to proceed.


96. Type in the file share name and click Next.


97. Specify the Authentication settings. Click Next to proceed.


98. Click Add and specify the Share Permissions.



99. Specify the access permissions for your file share.


100. Check whether the specified settings are correct. Click Previous to make any changes or click Create to proceed.


101. Check the summary and click Close to close the wizard.
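For reference, an NFS share can also be created and permissioned from PowerShell on the node that currently owns the role. The share name, path, and client names below are examples:

```powershell
# Create the NFS share on the clustered disk (path is an example)
New-NfsShare -Name "NFSShare" -Path "E:\NFSShare"

# Grant a specific host read/write access with root access allowed
Grant-NfsSharePermission -Name "NFSShare" -ClientName "" `
    -ClientType host -Permission readwrite -AllowRootAccess $true
```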


To manage created shares:

102. Open Failover Cluster Manager.

103. Expand the cluster and click Roles.

104. Choose the file share role, select the Shares tab, right-click the created file share, and select Properties.



We have configured a two-node fault-tolerant cluster in Windows Server 2016 with StarWind Virtual SAN as the backbone for HA shared storage. Now, you have continuously available file shares accessible over the NFS protocol.
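For reference, a Windows client with the Client for NFS feature installed can mount the share as follows. The server name matches the client access point used in this guide; the share name is an example:

```powershell
# Install the NFS client (on desktop editions, use
# Enable-WindowsOptionalFeature instead of Install-WindowsFeature)
Install-WindowsFeature -Name NFS-Client

# Mount the clustered NFS share with anonymous (AUTH_SYS) access
mount -o anon \\NFS-FILESERVER-TEST\NFSShare Z:
```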