
StarWind Virtual SAN® 3-node Compute and Storage Separated Scenario with Windows Server 2016

Introduction to StarWind Virtual SAN for Hyper-V

StarWind Virtual SAN® is a native Windows hypervisor-centric hardware-less VM storage solution. It creates a fully fault-tolerant and high-performance storage pool built for virtualization workloads by mirroring the existing server's storage and RAM between the participating storage cluster nodes. The mirrored storage resources are then connected to all cluster nodes and treated as local storage by all hypervisors and clustered applications. High Availability (HA) is achieved by providing multipath access to all storage nodes. StarWind Virtual SAN® delivers supreme performance compared to any dedicated SAN solution since it runs locally on the hypervisor and all I/O is processed by local RAM, SSD cache, and disks. This way, it is never bottlenecked by the storage fabric.

StarWind VSAN for Hyper-V System Requirements

Prior to installing StarWind Virtual SAN for Hyper-V, please make sure that the system meets the requirements, which are available via the following link: https://www.starwindsoftware.com/system-requirements

Please read StarWind Virtual SAN Best Practices document for additional information: https://www.starwindsoftware.com/resource-library/starwind-virtual-san-best-practices

Pre-Configuring the Windows Server 2016 Hosts

The network interconnection diagram is demonstrated below:

NOTE: Synchronization links can be connected either to redundant switches or directly between the nodes (recommended). Additional network connections may also be necessary, depending on the cluster setup and application requirements. For technical help with configuring additional networks, please contact the StarWind support department via the online community forum or via the support form (depending on your support plan).

1. Make sure that the domain controller is configured, and the servers are added to the domain.

NOTE: Please follow the recommendations in the KB article on how to place a DC when using StarWind Virtual SAN.

2. Install Failover Clustering and Multipath I/O features, as well as the Hyper-V role, on all cluster nodes. This can be done through the Server Manager (Add Roles and Features) menu item.
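The roles and features from this step can alternatively be installed via PowerShell. A sketch (run on each node; the Hyper-V role installation restarts the server):

```powershell
# Install the Failover Clustering and Multipath I/O features with their management tools
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO -IncludeManagementTools
# Install the Hyper-V role and restart the server to complete the installation
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```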

3. Configure the network interfaces on each node so that the Synchronization and iSCSI/StarWind heartbeat interfaces are in different subnets and connected according to the network diagram above. In this document, the 172.16.30.x and 172.16.40.x subnets are used for iSCSI/StarWind heartbeat traffic, while the 172.16.20.x, 172.16.21.x, and 172.16.22.x subnets are used for Synchronization traffic.

4. In order to allow iSCSI Initiators to discover all StarWind Virtual SAN interfaces, the StarWind configuration file (StarWind.cfg) should be changed after installation, with the StarWind service stopped on the node where the file is edited. Locate the StarWind Virtual SAN configuration file (the default path is "C:\Program Files\StarWind Software\StarWind\StarWind.cfg") and open it with WordPad as Administrator. Find the <iScsiDiscoveryListInterfaces value="0"/> string and change the value from 0 to 1 (so that it reads <iScsiDiscoveryListInterfaces value="1"/>). Save the changes and exit WordPad. Once StarWind.cfg has been changed and saved, the StarWind service can be restarted.
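The same edit can be scripted. A sketch, assuming the default installation path and the service name StarWindService (verify the actual service name on your system with Get-Service):

```powershell
$cfg = "C:\Program Files\StarWind Software\StarWind\StarWind.cfg"
# Stop the StarWind service before editing the configuration file (service name is an assumption)
Stop-Service -Name StarWindService
# Flip the iSCSI discovery flag from 0 to 1
(Get-Content $cfg) -replace 'iScsiDiscoveryListInterfaces value="0"', 'iScsiDiscoveryListInterfaces value="1"' |
    Set-Content $cfg
# Restart the service to apply the change
Start-Service -Name StarWindService
```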

Enabling Multipath Support

1. Open the MPIO Properties manager: Start -> Windows Administrative Tools -> MPIO. Alternatively, run the following PowerShell command:
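The command itself was not preserved in this copy of the document; the MPIO Properties dialog is normally opened with the standard Windows tool (an assumption based on the built-in utility):

```powershell
# Opens the MPIO Properties control panel applet
mpiocpl
```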

2. In the Discover Multi-Paths tab, select the Add support for iSCSI devices checkbox and click Add.
MPIO Properties
3. When prompted to restart the server, click Yes to proceed.
4. Repeat the same procedure on the other servers.


Automated Storage Tiering Configuration

In case of using Automated Storage Tiering, the disks can be connected to the OS directly in pass-through mode, or preconfigured into separate SSD and HDD RAID arrays which are then presented to the OS.

NOTE: A Simple tier has no redundancy, so in case of a disk failure there is a risk of losing data. It is recommended to configure resilient RAID arrays and use them as the underlying storage for the tier.

Automated Storage Tier creation

There are two ways to configure Automated Storage Tiering: via Server Manager or via the PowerShell console.

The first level of Storage Tiering is Storage Pools. At this level, separate physical disks are combined into a single pool, providing the ability to flexibly expand capacity and delegate administration.

The upper level is Storage Spaces. At this level, virtual disks are created using the available capacity of a storage pool. Storage Spaces feature the following characteristics: resiliency level, storage tiers, fixed provisioning, and precise administrative control.

1. Launch Server Manager->File and Storage Services->Volumes->Storage Pools. All disks available for Storage Pool are listed in Physical Disks. Click New Storage Pool.

Server Manager

NOTE: The disks available for the Storage Pool, as well as the parameters of each physical disk, can also be checked via PowerShell.
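The checks mentioned in the note above can be sketched with the standard Storage Spaces cmdlets:

```powershell
# List the physical disks that are eligible to be added to a Storage Pool
Get-PhysicalDisk | Where-Object CanPool -eq $true
# Show the parameters of the physical disks (name, media type, size, health)
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Size, HealthStatus
```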

2. Specify a Storage Pool name:

Storage Pool

3. Select the disks for the Storage Pool and click Next. In case of using Storage Tiers with both SSDs and HDDs, all of these disks need to be added to the Storage Pool.

Physical disks for the storage pool

4. Confirm that the settings are correct and click Create to create the Storage Pool.

NOTE: In case of creating the Storage Pool from RAID arrays, the MediaType parameter should be assigned manually. This can be done with the following PowerShell commands:

Assign the SSD MediaType to disks smaller than [ ]GB:

Assign the HDD MediaType to disks larger than [ ]GB:
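The MediaType assignments can be sketched with the standard Storage Spaces cmdlets. The size threshold below is an assumption; substitute the boundary between your SSD and HDD sizes, and note that disks must already be members of the Storage Pool for -MediaType to apply:

```powershell
# Assumed threshold; replace with the actual boundary between your SSD and HDD array sizes
$threshold = 1TB
# Mark pooled disks smaller than the threshold as SSD
Get-PhysicalDisk | Where-Object { $_.Size -lt $threshold } | Set-PhysicalDisk -MediaType SSD
# Mark pooled disks larger than the threshold as HDD
Get-PhysicalDisk | Where-Object { $_.Size -gt $threshold } | Set-PhysicalDisk -MediaType HDD
```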


5. The next step is to create a virtual disk on the storage pool. It is possible to create multiple virtual disks in the storage pool and then create multiple volumes on each virtual disk. Create a new virtual disk by right-clicking the storage pool and selecting New Virtual Disk.

Server manager

6. For Automated Storage Tiering, both HDD- and SSD-based disks or RAID arrays should be in the storage pool to make use of Storage Tiers. In case of using Storage Tiers, the Storage Layout can only be Simple or Mirror. Specify the Virtual Disk Name and select Create storage tiers on this virtual disk.

Virtual Disk Name

NOTE: A Simple tier has no redundancy, so in case of a disk failure there is a risk of losing data. It is recommended to configure resilient RAID arrays from disks and use them as the underlying storage for the tier.

7. Select the storage layout type. Under the Simple layout, the data is striped across physical disks, which is equivalent to a RAID-0 configuration. In case of using at least two disks, the Mirror layout can be configured, which is equivalent to RAID-1. Once done, click Next.

Storage Layout

8. Specify the provisioning type.

Fixed. This provisioning type means that the virtual disk cannot exceed the actual storage pool capacity.

Thin. This provisioning type means that a volume can be created with a size exceeding the storage pool capacity, with physical disks added later.

Choose the Fixed provisioning type, since it is required by Storage Tiers. Click Next.

9. Specify the size of the Virtual Disk.

Size of the virtual disk

NOTE: At least 8 GB of free space should be provisioned on each tier to allow Automated Storage Tiering to rebuild in case of disk loss.

10. Confirm the settings and click Create to create Virtual Disk.

NOTE: In case of using both SSD and HDD disks or RAID arrays, the Automated Storage Tier consists of so-called "hot" and "cold" tiers. Automated Storage Tiering builds a data map based on how often certain data is used, thus defining how hot individual data blocks are. During the optimization process, which is launched automatically every day, the hot data, i.e. the most frequently used data, is moved to the fast SSD tier, while the less frequently used data, the so-called cold data, is moved to the slower HDD tier.

Since the SSD tier data placement is updated only once a day, the optimization can also be run manually with the following CMD one-liner:
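The one-liner itself was not preserved in this copy of the document; storage tier optimization is normally triggered with defrag's /G (tier optimization) switch. A sketch, assuming the tiered volume is mounted at C:\ClusterStorage\Volume1:

```powershell
# /H runs at normal priority, /G performs storage tier optimization on the given volume
defrag C:\ClusterStorage\Volume1 /H /G
```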

This command should be run on all cluster nodes, as it optimizes only the virtual disks owned by the node where it runs.

For certain files, it can be optimal to stay on the SSD tier permanently. An example is a VHDX file that is accessed frequently and requires minimum latency and high performance. This can be achieved by pinning the file to the SSD tier.

The following recommendations should be taken into account before running the command:

  • The command should be run from the node owning the storage (Cluster Shared Volume) with the file stored on it.
  • Local path to the storage (Cluster Shared Volume) on the node should be used.

After a file is pinned, it stays in the tier until the next optimization process, triggered either automatically or manually.

To pin files to the SSD tier, run the following PowerShell command:

To unpin files from the SSD tier, run the following PowerShell command:

The below PowerShell command lists all files that are currently pinned:
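The pin, unpin, and list commands were not preserved in this copy; the built-in file storage tier cmdlets are the likely candidates. A sketch (the file path, tier name, and drive letter below are examples, not values from the original document):

```powershell
# Pin a file to the SSD tier; the change takes effect at the next tier optimization run
Set-FileStorageTier -FilePath "C:\ClusterStorage\Volume1\VM1.vhdx" -DesiredStorageTierFriendlyName "SSDTier"
# Unpin the file from the SSD tier
Clear-FileStorageTier -FilePath "C:\ClusterStorage\Volume1\VM1.vhdx"
# List all files currently pinned on the given volume
Get-FileStorageTier -VolumeDriveLetter C
```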

11. Create a New Volume using New Volume Wizard:

Server manager

12. Select the server and disk and click Next.

13. Select the file system settings and click Next to proceed.

System Setting

NOTE: The steps described above can also be performed with PowerShell commands. In addition, PowerShell allows configuring extra parameters for better performance:

Set a 64K interleave size: -Interleave 65536.

Set -LogicalSectorSizeDefault 4096 instead of the default 512.

The cache size can be changed with the -WriteCacheSize [ ]GB parameter. Setting the cache size for an Automated Storage Tier is only possible via the PowerShell commands used to create it.

Set the SSD tier in a two-way mirror: -ResiliencySettingName Mirror -NumberOfDataCopies 2.

The number of threads can be set with the -NumberOfColumns parameter. The recommended number is the number of SSDs divided by 2.

An example of the PowerShell commands for creating a Storage Pool and a Virtual Disk with Tiered Storage is provided below:
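The example itself was not preserved in this copy of the document. The following is a sketch using the standard Storage Spaces cmdlets; the pool name, tier names, tier sizes, and cache size are assumptions to be adjusted to your environment:

```powershell
# Create the pool from all poolable disks (pool name and sector size are examples)
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk | Where-Object CanPool -eq $true) `
    -LogicalSectorSizeDefault 4096

# Define the SSD and HDD tiers inside the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a fixed, mirrored, tiered virtual disk (sizes and cache size are examples)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 1TB `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
    -ProvisioningType Fixed -Interleave 65536 -WriteCacheSize 1GB
```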

The operations specified in this section should be performed on each server.

Installing File Server Roles

Please follow the steps below if file share configuration is required.

Scale-Out File Server (SOFS) for application data

1. Open Server Manager: Start -> Server Manager
2. Select: Manage -> Add Roles and Features
3. Follow the installation wizard steps to install the roles selected in the screenshot below:

StarWind - Select Server Role

4. Restart the server after the installation is completed and perform the steps above on each server.

File Server for general use with SMB share

1. Open Server Manager: Start -> Server Manager
2. Select: Manage -> Add Roles and Features
3. Follow the installation wizard steps to install the roles selected in the screenshot below:

StarWind - Select Server Role

4. Restart the server after installation is completed and perform steps above on each server.

File Server for general use with NFS share

1. Open Server Manager: Start -> Server Manager
2. Select: Manage -> Add Roles and Features
3. Follow the installation wizard steps to install the roles selected in the screenshot below:

StarWind Select Role NFS

4. Restart the server after installation is completed and perform steps above on each server.

Installing StarWind VSAN for Hyper-V

1. Download the StarWind setup executable file from the StarWind website:
https://www.starwind.com/registration-starwind-virtual-san

2. Launch the downloaded setup file on the server to install StarWind Virtual SAN or one of its components. The Setup wizard will appear. Read and accept the License Agreement.
StarWind License agreement
3. Carefully read the information about the new features and improvements. Red text indicates warnings for users updating existing installations.
4. Select Browse to modify the installation path if necessary. Click on Next to continue.
select destination location
5. Select the following components for the minimum setup:

  • StarWind Virtual SAN Service. The StarWind Virtual SAN service is the “core” of the software. It can create iSCSI targets as well as share virtual and physical devices. The service can be managed from StarWind Management Console on any Windows computer that is on the same network. Alternatively, the service can be managed from StarWind Web Console deployed separately.
  • StarWind Management Console. Management Console is the Graphic User Interface (GUI) part of the software that controls and monitors all storage-related operations (e.g., allows users to create targets and devices on StarWind Virtual SAN servers connected to the network).

NOTE: To manage StarWind Virtual SAN installed on a Windows Server Core edition with no GUI, StarWind Management Console should be installed on a different computer running the GUI-enabled Windows edition.

6. Specify Start Menu Folder.
Select start menu folder
7. Enable the checkbox if a desktop icon needs to be created. Click on Next to continue.
8. When the license key prompt appears, choose the appropriate option:

  • Request time-limited fully functional evaluation key.
  • Request FREE version key.
  • Select the previously purchased commercial license key.

9. Click on the Browse button to locate the license file.
10. Review the licensing information.
11. Verify the installation settings. Click on Back to make any changes or Install to proceed with installation.
12. Enable the appropriate checkbox to launch StarWind Management Console right after the setup wizard is closed and click on Finish.
13. Repeat the installation steps on the partner node.

Creating StarWind HA Devices

1. Open Add Device (advanced) Wizard.
2. Select Hard Disk Device as the type of device to be created.
3. Select Virtual Disk.
4. Specify a virtual disk Name, Location, and Size.
5. Select the Thick provisioned disk type.
6. Define a caching policy and specify a cache size (in MB). Also, the maximum available cache size can be specified by selecting the appropriate checkbox. Optionally, define the L2 caching policy and cache size.
7. Specify Target Parameters. Select the Target Name checkbox to enter a custom target name. Otherwise, the name is generated automatically in accordance with the specified target alias.
8. Click Create to add a new device and attach it to the target.
9. Click Close to finish the device creation.
10. Right-click the recently created device and select Replication Manager from the shortcut menu.
11. Select the Add Replica button in the top menu.
12. The successfully added devices appear in the StarWind Management Console.

Select the Required Replication Mode

The replication can be configured in one of two modes:

Synchronous “Two-Way” Replication
Synchronous, or active-active, replication ensures real-time synchronization and load balancing of data between two or three cluster nodes. Such a configuration tolerates the failure of two out of three storage nodes and enables the creation of an effective business continuity plan. With synchronous mirroring, each write operation requires confirmation from both storage nodes. This guarantees the reliability of data transfers but is demanding on bandwidth, since mirroring will not work over high-latency networks.

Asynchronous “One-Way” Replication
Asynchronous replication is used to copy data over a WAN to a location separate from the main storage system. With asynchronous replication, confirmation from each storage node is not required during the data transfer. Asynchronous replication does not guarantee data integrity in case of storage or network failure; hence, some data loss may occur, which makes asynchronous replication a better fit for backup and disaster recovery purposes where some data loss is acceptable. The replication process can be scheduled to prevent overloading the main storage system and network channels.
Please select the required option:

Synchronous “Two-Way” replication

1. Select Synchronous “Two-Way” replication as a replication mode.

2. Specify a partner Host name or IP address and Port Number.

Selecting the Failover Strategy

StarWind provides two options for configuring a failover strategy:

Heartbeat
The Heartbeat failover strategy helps avoid the "split-brain" scenario, in which the HA cluster nodes are unable to synchronize but continue to accept write commands from the initiators independently. It can occur when all synchronization and heartbeat channels disconnect simultaneously and the partner nodes do not respond to the node's requests. As a result, the StarWind service assumes the partner nodes to be offline and continues operating in single-node mode using the data written to it.
If at least one heartbeat link is online, the StarWind services can communicate with each other via this link. The device with the lowest priority will be marked as not synchronized and subsequently blocked for further read and write operations until the synchronization channel is restored. At the same time, the partner device on the synchronized node flushes data from the cache to the disk to preserve data integrity in case the node goes down unexpectedly. It is recommended to assign more independent heartbeat channels during replica creation to improve system stability and avoid the "split-brain" issue.
With the heartbeat failover strategy, the storage cluster will continue working with only one StarWind node available.

Node Majority
The Node Majority failover strategy ensures the synchronization connection without any additional heartbeat links. The failure-handling process starts when a node detects the absence of a connection with its partner.
The main requirement for keeping a node operational is an active connection with more than half of the HA device's nodes. The count of available partners is based on their "votes".
In case of a two-node HA storage, both nodes will be disconnected if there is a problem on either node itself or in the communication between them. Therefore, the Node Majority failover strategy requires the addition of a third Witness node, which participates in the node count for the majority but neither contains data nor is involved in processing clients' requests. If an HA device is replicated between 3 nodes, no Witness node is required.
With Node Majority failover strategy, failure of only one node can be tolerated. If two nodes fail, the third node will also become unavailable to clients’ requests.
Please select the required option:

Heartbeat

1. Select Failover Strategy.

2. Select Create new Partner Device and click Next.

3. Select a partner device Location.

4. Click Change Network Settings.

5. Specify the interfaces for Synchronization and Heartbeat Channels. Click OK and then click Next.


6. In Select Partner Device Initialization Mode, select Synchronize from existing Device and click Next.

7. Click Create Replica. Click Finish to close the wizard.

8. The successfully added device appears in StarWind Management Console.

9. Choose device, open Replication Manager and click Add replica again.

10. Select Synchronous “Two-Way” Replication as a replication mode. Click Next to proceed.

11. Specify a partner Host name or IP address and Port Number.

12. Select Failover Strategy.

13. Select Create new Partner Device and click Next.
14. Select a partner device Location.
15. Click Change Network Settings.

16. Specify the interfaces for Synchronization and Heartbeat Channels. Click OK and then click Next.

 

StarWind Specify Interfaces for Synchronization Channels

NOTE: It is not recommended to configure the Heartbeat and iSCSI channels on the same interfaces to avoid the split-brain issue. If the Synchronization and Heartbeat interfaces are located on the same network adapter, it is recommended to assign one more Heartbeat interface to a separate adapter.

17. In Select Partner Device Initialization Mode, select Synchronize from existing Device and click Next.
18. Click Create Replica. Click Finish to close the wizard.
The successfully added device appears in StarWind Management Console.
19. Follow a similar procedure to create the other virtual disks that will be used as storage repositories.

NOTE: To extend an Image File or a StarWind HA device to the required size, please check the article below:
https://knowledgebase.starwindsoftware.com/maintenance/how-to-extend-image-file-or-high-availability-device/

Node Majority

1. Select the Node Majority failover strategy and click Next.
Node Majority

2. Choose Create new Partner Device and click Next.

3. Specify the partner device Location and modify the target name if necessary. Click Next.

4. In Network Options for Replication, press the Change network settings button and select the synchronization channel for the HA device.

5. In Specify Interfaces for Synchronization Channels, select the checkboxes with the appropriate networks and click OK. Then click Next.

6. Select Synchronize from existing Device as the partner device initialization mode.

7. Press the Create Replica button and close the wizard.

8. The added devices will appear in StarWind Management Console.

9. Choose device, open Replication Manager and click Add replica again.

10. Select Synchronous “Two-Way” Replication as a replication mode. Click Next to proceed.


11. Specify a partner Host name or IP address and Port Number.

12. Select the Node Majority failover strategy and click Next.
Node Majority

13. Choose Create new Partner Device and click Next.

14. Specify the partner device Location and modify the target name if necessary. Click Next.

15. In Network Options for Replication, press the Change network settings button and select the synchronization channel for the HA device.

16. In Specify Interfaces for Synchronization Channels, select the checkboxes with the appropriate networks and click OK. Then click Next.

17. Select Synchronize from existing Device as the partner device initialization mode.

18. Press the Create Replica button and close the wizard.

19. The added devices will appear in StarWind Management Console.

Repeat the steps above to create other virtual disks if necessary.

NOTE: To extend an Image File or a StarWind HA device to the required size, please check the article below:
https://knowledgebase.starwindsoftware.com/maintenance/how-to-extend-image-file-or-high-availability-device/

Provisioning StarWind HA storage to Windows Server Hosts

1. Launch the Microsoft iSCSI Initiator by executing the following command in a CMD window:
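The command itself was not preserved in this copy of the document; the iSCSI Initiator control panel is normally opened with the standard Windows tool (an assumption based on the built-in utility):

```powershell
# Opens the Microsoft iSCSI Initiator properties dialog
iscsicpl
```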

2. Navigate to the Discovery tab.

iscsi

3. Click the Discover Portal button. In the Discover Target Portal dialog, type in the iSCSI interface IP address of the first StarWind node that will be used to connect the StarWind provisioned targets. The steps below provide instructions on how to discover targets within the 172.16.30.X subnet; the same should be done for the 172.16.40.X subnet. Click Advanced.

Discover Target Portal
4. Select Microsoft iSCSI Initiator as the Local adapter, select the Initiator IP in the same subnet as the IP address of the partner node from the previous step. Confirm the actions to complete the Target Portal discovery.
Advanced Setting
5. Click the Discover Portal button. In the Discover Target Portal dialog, type in the iSCSI interface IP address of the second StarWind node that will be used to connect the StarWind provisioned targets. Click Advanced.

Discover Target Portal

6. Select Microsoft iSCSI Initiator as the Local adapter, select the Initiator IP in the same subnet as the IP address of the partner node from the previous step. Confirm the actions to complete the Target Portal discovery.

Advanced Setting

 

7. Click the Discover Portal button. In the Discover Target Portal dialog, type in the iSCSI interface IP address of the third StarWind node that will be used to connect the StarWind provisioned targets. Click Advanced.

Discover Target Portal

8. Select Microsoft iSCSI Initiator as the Local adapter, select the Initiator IP in the same subnet as the IP address of the partner node from the previous step. Confirm the actions to complete the Target Portal discovery.

Advanced Setting

9. Repeat steps 1-8 for the 172.16.40.X subnet.

10. Repeat steps 1-9 on the partner node.

11. All the target portals are added on the first Compute node.

iSCSI Initiator Property

Address Port Adapter IP Address
172.16.30.10 3260 Microsoft iSCSI initiator 172.16.30.40
172.16.30.20 3260 Microsoft iSCSI initiator 172.16.30.40
172.16.30.30 3260 Microsoft iSCSI initiator 172.16.30.40
172.16.40.10 3260 Microsoft iSCSI initiator 172.16.30.40
172.16.40.20 3260 Microsoft iSCSI initiator 172.16.30.40
172.16.40.30 3260 Microsoft iSCSI initiator 172.16.30.40

12. All the target portals are added on the second Compute node.

iSCSI Properties

Address Port Adapter IP Address
172.16.30.10 3260 Microsoft iSCSI initiator 172.16.30.50
172.16.30.20 3260 Microsoft iSCSI initiator 172.16.30.50
172.16.30.30 3260 Microsoft iSCSI initiator 172.16.30.50
172.16.40.10 3260 Microsoft iSCSI initiator 172.16.30.50
172.16.40.20 3260 Microsoft iSCSI initiator 172.16.30.50
172.16.40.30 3260 Microsoft iSCSI initiator 172.16.30.50

Connecting Targets

1. Click the Targets tab. The previously created targets are listed in the Discovered Targets section.

NOTE: If the created targets are not listed, check the firewall settings of the StarWind Node as well as the list of networks served by the StarWind Node (go to StarWind Management Console -> Configuration -> Network). Alternatively, check the Access Rights tab on the corresponding StarWind VSAN server in StarWind Management Console for any restrictions.

iSCSI initiator properties Targets

2. Select the Witness target from the first StarWind Node and click Connect.
3. Enable checkboxes as shown in the image below. Click Advanced

Connect to target

4. Select Microsoft iSCSI Initiator in the Local adapter dropdown menu. In Initiator IP, select 172.16.30.40; in Target portal IP, select 172.16.30.10. Confirm the action.

Advanced Settings

5. Select the Witness target from the first StarWind Node again and click Connect.
6. Enable checkboxes as shown in the image below. Click Advanced…

Connect To Target
7. Select Microsoft iSCSI Initiator in the Local adapter dropdown menu. In Initiator IP, select 172.16.40.40; in Target portal IP, select 172.16.40.10. Confirm the action.

Advanced Setting

8.  Select the Witness target from the second StarWind Node and click Connect.
9.  Enable checkboxes as shown in the image below. Click Advanced…

Connect to Target

10.  Select Microsoft iSCSI Initiator in the Local adapter dropdown menu. In Initiator IP, select 172.16.30.40; in Target portal IP, select 172.16.30.20. Confirm the action.

Advanced Setting

11.  Select the Witness target from the second StarWind Node once again and click Connect.
12.  Enable checkboxes as shown in the image below. Click Advanced…

Connect to Target
13.  Select Microsoft iSCSI Initiator in the Local adapter dropdown menu. In Initiator IP, select 172.16.40.40, in Target portal IP, select 172.16.40.20. Confirm the action.

Advanced Settings

14.  Select the Witness target from the third StarWind Node and click Connect.
15.  Enable checkboxes as shown in the image below. Click Advanced…

Connect To Target

16. Select Microsoft iSCSI Initiator in the Local adapter dropdown menu. In Initiator IP, select 172.16.30.40, in Target portal IP, select 172.16.30.30. Confirm the action.

Advanced Settings

17. Select the Witness target from the third StarWind Node once more and click Connect.

18.  Enable checkboxes as shown in the image below. Click Advanced…

Connect To Target

19. Select Microsoft iSCSI Initiator in the Local adapter dropdown menu. In Initiator IP, select 172.16.40.40, in Target portal IP, select 172.16.40.30. Confirm the action.

Advanced Settings

20. Repeat steps 1-19 for all remaining HA device targets.

21. Repeat steps 1-19 on the other Compute node, specifying the corresponding local and data channel IP addresses. The result should look like the screenshot below.

iSCSI Properties

 

Configuring Multipath

NOTE: It is recommended to set the Round Robin or Least Queue Depth MPIO load balancing policy.

1.  Configure the MPIO policy for every target, selecting the load balancing policy of choice. Select the target located on the local server and click Devices.
2.  In the Devices dialog, click MPIO.

MPIO Policy

3. Select the appropriate load balancing policy.

Least Queue Depth
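As an alternative sketch, the default MPIO load balancing policy for devices claimed by the Microsoft DSM can also be set globally via PowerShell (LQD = Least Queue Depth, RR = Round Robin):

```powershell
# Set the default load balancing policy for newly claimed devices to Least Queue Depth
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
# Verify the current default policy
Get-MSDSMGlobalDefaultLoadBalancePolicy
```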


Connecting Disks to Servers

1. Open the Disk Management snap-in. The StarWind disks will appear as unallocated and offline.
disk management
2. Bring the disks online by right-clicking on them and selecting the Online menu option.
3. Select the CSV disk (check the disk size to be sure) and right-click on it to initialize.
4. By default, the system will offer to initialize all non-initialized disks. Use the Select Disks area to choose the disks. Select GPT (GUID Partition Style) for the partition style to be applied to the disks. Press OK to confirm.
initialize disk

5. Right-click on the selected disk and choose New Simple Volume.
6. In New Simple Volume Wizard, indicate the volume size. Click Next.
7. Assign a drive letter to the disk. Click Next.

assign drive letter

8. Select NTFS in the File System dropdown menu. Keep Allocation unit size as Default. Set the Volume Label of choice. Click Next.

NTFS volume
9. Press Finish to complete.
10. Complete steps 1-9 for the Witness disk. Do not assign a drive letter or drive path to it.

do not assign drive letter

11. On the partner node, open the Disk Management snap-in. All StarWind disks will appear offline. If the status is different from the one shown below, click Action->Refresh in the top menu to update the information about the disks.
12. Repeat step 2 to bring all the remaining StarWind disks online.

Creating a Failover Cluster in Windows Server 2016

NOTE: To avoid issues during the cluster validation configuration, it is recommended to install the latest Microsoft updates on each node.
1. Open Server Manager. Select the Failover Cluster Manager item from the Tools menu.

Server manager Dashboard

2. Click the Create Cluster link in the Actions section of Failover Cluster Manager.
failover cluster
3. Specify the servers to be added to the cluster. Click Next to continue.

Create Cluster Wizard

4. Validate the configuration by running the cluster validation tests: select Yes… and click Next to continue.
validation warning
5. Specify the Cluster Name.
NOTE: If the cluster servers get IP addresses over DHCP, the cluster also gets its IP address over DHCP. If the IP addresses are set statically, set the cluster IP address manually.

Create Cluster Wizard

6. Make sure that all settings are correct. Click Previous to make any changes or Next to proceed.

Create Cluster Wizard

NOTE: If the Add all eligible storage to the cluster checkbox is selected, the wizard will add all disks to the cluster automatically, and the device with the smallest storage volume will be assigned as the Witness. It is recommended to uncheck this option before clicking Next and to add the cluster disks and the Witness drive manually.

7. The process of the cluster creation starts. Upon the completion, the system displays the summary with the detailed information. Click Finish to close the wizard.

Create Cluster Wizard
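The wizard steps above map onto two FailoverClusters cmdlets. The sketch below is a non-authoritative equivalent: the node names, cluster name, and IP address are placeholders, and -NoStorage mirrors the recommendation to uncheck Add all eligible storage to the cluster:

```powershell
# Run the validation tests first (equivalent of step 4)
Test-Cluster -Node "SW-NODE1", "SW-NODE2", "SW-NODE3"

# Create the cluster without adding eligible storage automatically
New-Cluster -Name "SW-CLUSTER" -Node "SW-NODE1", "SW-NODE2", "SW-NODE3" `
    -StaticAddress "192.168.0.100" -NoStorage
```

Omit -StaticAddress if the cluster nodes receive their IP addresses over DHCP.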

Adding Storage to the Cluster

1. In Failover Cluster Manager, navigate to Cluster -> Storage -> Disks. Click Add Disk in the Actions panel, choose StarWind disks from the list and confirm the selection.

Add Disks to Cluster

2. To configure the cluster witness disk, right-click on Cluster and proceed to More Actions -> Configure Cluster Quorum Settings.

Configure Cluster Quorum

3. Follow the wizard and choose the Select the quorum witness option. Click Next.

select quorum

4. Select Configure a disk witness. Click Next.

select disk witness

5. Select the Witness disk to be assigned as the cluster witness disk. Click Next and press Finish to complete the operation.

Configure Storage Witness

6. In Failover Cluster Manager, right-click the disk and select Add to Cluster Shared Volumes.

add to CSV

7. If renaming of the cluster shared volume is required, right-click on the disk and select Properties. Type the new name for the disk and click Apply followed by OK.

CSV properties

8. Repeat steps 6-7 for any other disk in Failover Cluster Manager. The resulting list of disks will look similar to the screenshot below.

Failover Cluster Manager
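Steps 1-8 have PowerShell equivalents in the FailoverClusters module. A sketch, assuming the witness LUN surfaced as Cluster Disk 1 and a data LUN as Cluster Disk 2 (both names are placeholders):

```powershell
# Add all available StarWind disks to the cluster (step 1)
Get-ClusterAvailableDisk | Add-ClusterDisk

# Assign the small witness LUN as the disk witness (steps 2-5)
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

# Optionally rename a data disk, then promote it to a Cluster Shared Volume (steps 6-7)
(Get-ClusterResource -Name "Cluster Disk 2").Name = "CSV1"
Add-ClusterSharedVolume -Name "CSV1"
```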
Configuring Cluster Network Preferences

1. In the Networks section of Failover Cluster Manager, right-click the network in the list. If required, set a new name for it to identify the network by its subnet. Apply the change and press OK.

NOTE: Do not allow cluster network communication on either iSCSI or synchronization network.

Cluster Network Properties

2. Rename other networks as described above, if required.

Network for Live Migration

3. In the Actions tab, click Live Migration Settings. Uncheck the synchronization network only if the iSCSI network is 10 Gbps or faster. Apply the changes and click OK.

Networks
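The network roles from step 1 can also be set from PowerShell. Assuming the networks were renamed to Management, iSCSI, and Sync (placeholder names), the sketch below disables cluster network communication on the iSCSI and synchronization networks, per the NOTE above:

```powershell
# Role values: 0 = no cluster communication, 1 = cluster only, 3 = cluster and client
(Get-ClusterNetwork -Name "Management").Role = 3
(Get-ClusterNetwork -Name "iSCSI").Role      = 0
(Get-ClusterNetwork -Name "Sync").Role       = 0
```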

The cluster configuration is now complete, and the cluster is ready for virtual machine deployment. Select Roles and, in the Actions tab, click Virtual Machines -> New Virtual Machine. Complete the wizard.

Configuring File Shares

Please follow the steps below if file shares should be configured on cluster nodes.

Configuring the Scale-Out File Server Role

1. To configure the Scale-Out File Server Role, open Failover Cluster Manager
2. Right-click the cluster name, then click Configure Role and click Next to continue

StarWind Configuring Server Roles

3. Select the File Server item from the list in High Availability Wizard and click Next to continue

StarWind Select Role

4. Select Scale-Out File Server for application data and click Next

Select Server Type

5. On the Client Access Point page, in the Name text field, type the NetBIOS name that will be used to access a Scale-Out File Server

StarWind File Server Name

Click Next to continue.

6. Check whether the specified information is correct. Click Next to continue or Previous to change the settings

File Server Confirmation

7. Once the installation finishes successfully, the Wizard should look like the screenshot below.

Click Finish to close the Wizard.

File Server Summary

8. The newly created role should now look like the screenshot below.

File Server Confirmation

NOTE: If the role status is Failed and it is unable to Start, please follow the steps below:

File Server Error

  • Open Active Directory Users and Computers;
  • Enable the Advanced view if it is not enabled;
  • Edit the properties of the OU containing the cluster computer object (in this case – Production);
  • Open the Security tab and click Advanced;
  • In the appeared window, press Add (the Permission Entry dialog box opens), click Select a principal;
  • In the appeared window, click Object Types, select Computers, and click OK;
  • Enter the name of the cluster computer object (in this case – Production);

Select User, Computer

  • Go back to the Permission Entry dialog, scroll down, and select Create Computer Objects;

Permission Entry for Computers

  • Click OK on all opened windows to confirm the changes.
  • Open Failover Cluster Manager, right-click SOFS role and click Start Role
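If preferred, the SOFS role can be created (and, after fixing the OU permissions, started) from PowerShell; the role name is a placeholder:

```powershell
# Create the Scale-Out File Server role (equivalent of steps 2-5)
Add-ClusterScaleOutFileServerRole -Name "SOFS"

# After granting Create Computer Objects on the OU, start a failed role
Start-ClusterGroup -Name "SOFS"
```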

Configuring File Share

To Add File Share:

1. Open Failover Cluster Manager.
2. Expand the cluster and then click Roles.
3. Right-click the file server role and then press Add File Share.
4. On the Select the profile for this share page, click SMB Share – Applications and then click Next.

SMB Share

5. Select a CSV to host the share. Click Next to proceed

Selecting server for share

6. Type in the file share name and click Next

Specify Share name

7. Make sure that the Enable Continuous Availability box is checked. Click Next to proceed

Configuring share settings

8. Specify the access permissions for the file share.

Permissions to Control Access

NOTE:

  • For the Scale-Out File Server for Hyper-V, all Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators must be provided with the full control on the share and file system;
  • For the Scale-Out File Server on Microsoft SQL Server, the SQL Server service account must be granted full control on the share and the file system.

9. Check whether specified settings are correct. Click Previous to make any changes or click Create to proceed.

Confirm Selections SOFS

10. Check the summary and click Close to close the Wizard.

SOFS results

To Manage Created File Shares:

1. Open Failover Cluster Manager.
2. Expand the cluster and click Roles.
3. Choose the file share role, select the Shares tab, right-click the created file share, and select Properties:

Failover Cluster manager Shares
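The continuously available SMB share from steps 4-8 can be sketched with New-SmbShare. The path, scope name, and accounts below are placeholders; the full-control grants follow the NOTE above:

```powershell
# Create a folder on the CSV and share it through the SOFS role
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\Shares\VMs"
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" `
    -ScopeName "SOFS" -ContinuouslyAvailable $true `
    -FullAccess "DOMAIN\Hyper-V-Admins", "DOMAIN\HV-HOST1$", "NT AUTHORITY\SYSTEM"
```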

Configuring the File Server for General Use Role

NOTE: To configure File Server for General Use, the cluster should have available storage

1. To configure the File Server for General Use role, open Failover Cluster Manager
2. Right-click on the cluster name, then click Configure Role and click Next to continue

StarWind Configuring Server Roles

3. Select the File Server item from the list in High Availability Wizard and click Next to continue

StarWind Select Role

4. Select File Server for general use and click Next

File Server Type


5. On the Client Access Point page, in the Name text field, type the NetBIOS name that will be used to access the File Server, and specify the IP address for it

File Server Client Access Point

Click Next to continue

6. Select the Cluster disk and click Next

Select Storage

7. Check whether the specified information is correct. Click Next to proceed or Previous to change the settings

File Server General Use Confirmation

8. Once the installation finishes successfully, the Wizard should look like the screenshot below

Click Finish to close the Wizard.

File Server Summary

9. The newly created role should now look like the screenshot below

File Server

NOTE: If the role status is Failed and it is unable to Start, please follow the steps below:

  • Open Active Directory Users and Computers;
  • Enable the Advanced view if it is not enabled;
  • Edit the properties of the OU containing the cluster computer object (in this case – Production);
  • Open the Security tab and click Advanced;
  • In the appeared window, press Add (the Permission Entry dialog box opens), click Select a principal;
  • In the appeared window, click Object Types, select Computers, and click OK;
  • Enter the name of the cluster computer object (in this case – Production);

Select User, Computer

  • Go back to the Permission Entry dialog, scroll down, and select Create Computer Objects;

Permission Entry for Computers

  • Click OK on all opened windows to confirm the changes.
  • Open Failover Cluster Manager, right-click File Share role and click Start Role
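The role creation above can also be scripted; the sketch below is a hedged equivalent in which the role name, cluster disk, and IP address are placeholders:

```powershell
# Create the File Server for general use role (equivalent of steps 2-6)
Add-ClusterFileServerRole -Name "FS01" -Storage "Cluster Disk 3" `
    -StaticAddress "192.168.0.101"
```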

Configuring SMB File Share

To Add SMB File Share:

1. Open Failover Cluster Manager
2. Expand the cluster and then click Roles
3. Right-click the File Server role and then press Add File Share
4. On the Select the profile for this share page, click SMB Share – Quick and then click Next

Profile for SMB Share

5. Select available storage to host the share. Click Next to continue

Server and path for share

6. Type in the file share name and click Next

Share Name

7. Make sure that the Enable Continuous Availability box is checked. Click Next to continue

Configure share settings

8. Specify the access permissions for the file share

Permissions

9. Check whether specified settings are correct. Click Previous to make any changes or Next/Create to continue

Confirmation

10. Check the summary and click Close

View results

To manage created SMB File Shares:

1. Open Failover Cluster Manager
2. Expand the cluster and click Roles
3. Choose the File Share role, select the Shares tab, right-click the created file share, and select Properties

Failover cluster settings


Configuring NFS file share

To Add NFS File Share:

1. Open Failover Cluster Manager.
2. Expand the cluster and then click Roles.
3. Right-click the File Server role and then press Add File Share.
4. On the Select the profile for this share page, click NFS Share – Quick and then click Next

Profile for this share

5. Select available storage to host the share. Click Next to continue

Server and path for share

6. Type in the file share name and click Next

Share Name

7. Specify the Authentication. Click Next and confirm the message in pop-up window to continue

Authentication method

8. Click Add and specify Share Permissions

Share Permissions

Add Permission

9. Specify the access permissions for the file share

Permissions


10. Check whether specified settings are correct. Click Previous to make any changes or click Create to continue

Confirm Selections

11. Check the summary and click Close to close the Wizard

NFS Share results

To manage created NFS File Shares:

1. Open Failover Cluster Manager.
2. Expand the cluster and click Roles.
3. Choose the File Share role, select the Shares tab, right-click the created file share, and select Properties.

Failover cluster settings
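For reference, a hedged PowerShell sketch of the NFS share creation using the NfsShare module cmdlets; the share name, path, authentication choice, and permission grant below are placeholders, and the exact parameters should be checked against your OS build:

```powershell
# Create the NFS share with UNIX (AUTH_SYS) authentication and unmapped access
New-NfsShare -Name "NfsShare1" -Path "D:\Shares\NfsShare1" `
    -Authentication Sys -EnableUnmappedAccess $true

# Grant clients read/write access (client name and type are examples)
Grant-NfsSharePermission -Name "NfsShare1" -ClientName "All Machines" `
    -ClientType "builtin" -Permission "readwrite"
```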

Save time by finding all the answers to your questions in one place!
Have a question? Doubting something? Or just want an independent opinion?
The StarWind Forum exists for you to “trust and verify” any issue that has already been discussed and solved.