StarWind Resource Library

StarWind Virtual SAN Compute and Storage Separated Windows Server 2012R2

Published: December 8, 2017

Introduction

StarWind Virtual SAN supports both hyper-converged and compute-and-storage-separated architectures. Running the compute and storage layers separately makes it possible to scale compute and storage resources independently.

This technical paper provides detailed step-by-step guidance on configuring a 2-node Hyper-V Failover Cluster using StarWind Virtual SAN to turn the storage resources of the separated servers into fault-tolerant, fully redundant shared storage for Hyper-V environments.

In a Failover Cluster configuration, if one of the cluster nodes fails, the other nodes automatically take over its resources and continue serving the applications, so the workload remains uninterrupted and secure. Adding StarWind disks to CSVs enables efficient use of storage, simplifies its management, enhances availability, and increases resilience. Once this is done, you can start creating highly available virtual machines on them.

This guide is intended for experienced Windows system administrators and IT professionals who would like to configure a Hyper-V cluster using StarWind Virtual SAN to convert the clustered storage space into a fault-tolerant shared storage resource. It also highlights how to connect the StarWind HA devices to the Microsoft iSCSI initiator and configure the StarWind shared storage as the Cluster Shared Volumes.

A full set of up-to-date technical documentation can always be found here, or by pressing the Help button in the StarWind Management Console.

For any technical inquiries, please visit our online community, the Frequently Asked Questions page, or use the support form to contact our technical support department.

Pre-Configuring the Servers

The image below depicts a network architecture of the configuration described in this guide:

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-1

NOTE: Additional network connections may be required depending on the cluster setup and the applications it is running.

1. Make sure that the cluster nodes are added to the domain.

2. Install the Failover Clustering and Multipath I/O features, as well as the Hyper-V role, on the cluster nodes. This can be done through Server Manager (the Add Roles and Features menu item).
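If you prefer the command line over Server Manager, the same roles and features can be installed from an elevated PowerShell prompt. A minimal sketch, assuming the default feature names on Windows Server 2012 R2:

```powershell
# Install the Failover Clustering and Multipath I/O features on this node
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO -IncludeManagementTools

# Install the Hyper-V role; -Restart reboots the node to finish the installation
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```

Run the same commands on both cluster nodes.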

3. Configure the network interfaces on each node to make sure that the Synchronization and iSCSI/StarWind heartbeat interfaces are in different subnets and connected as shown in the network diagram above.
In this document, the 172.16.10.x and 172.16.20.x subnets are used for iSCSI/StarWind heartbeat traffic, while the 172.16.30.x and 172.16.40.x subnets are used for the Synchronization traffic.
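The interfaces can be addressed with PowerShell instead of the GUI. In this sketch the interface aliases ("iSCSI1", "Sync1", etc.) and host addresses are placeholders; substitute your real NIC names and the addresses you have planned for each subnet:

```powershell
# Example static addressing for the first StarWind node.
# Aliases and host addresses below are placeholders - adjust to your environment.
New-NetIPAddress -InterfaceAlias "iSCSI1" -IPAddress 172.16.10.88 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI2" -IPAddress 172.16.20.88 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync1"  -IPAddress 172.16.30.88 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync2"  -IPAddress 172.16.40.88 -PrefixLength 24
```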

4. To allow iSCSI initiators to discover all StarWind Virtual SAN interfaces, the StarWind configuration file (StarWind.cfg) should be changed after stopping the StarWind service on the node where it is edited.
Locate the StarWind Virtual SAN configuration file (the default path is C:\Program Files\StarWind Software\StarWind\StarWind.cfg) and open it with WordPad as Administrator.
Find the iScsiDiscoveryListInterfaces string and change its value from 0 to 1. Save the changes and exit WordPad. Once StarWind.cfg has been changed and saved, the StarWind service can be started again.
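The same edit can be scripted. This is a sketch only: the service name StarWindService is an assumption (verify it with `Get-Service *StarWind*` before running), and the path assumes a default installation:

```powershell
# Stop the StarWind service before editing its configuration file.
# 'StarWindService' is an assumed service name - check with: Get-Service *StarWind*
Stop-Service -Name StarWindService

# Flip the iSCSI discovery setting from 0 to 1 in StarWind.cfg
$cfg = 'C:\Program Files\StarWind Software\StarWind\StarWind.cfg'
(Get-Content $cfg) -replace 'iScsiDiscoveryListInterfaces value="0"',
                            'iScsiDiscoveryListInterfaces value="1"' |
    Set-Content $cfg

# Start the service again once the change is saved
Start-Service -Name StarWindService
```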

Enabling Multipath Support

5. On cluster nodes, open the MPIO manager: Start->Administrative Tools->MPIO;

6. Go to the Discover Multi-Paths tab;

7. Tick the Add support for iSCSI devices checkbox and click Add.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-2

8. When prompted to restart the server, click Yes to proceed.

NOTE: Repeat the procedure on the second cluster node.
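Steps 5-8 can also be performed with the MPIO cmdlets, which is convenient when configuring several nodes:

```powershell
# Claim all iSCSI-attached devices for MPIO - equivalent to ticking
# "Add support for iSCSI devices" in the MPIO control panel
Enable-MSDSMAutomaticClaim -BusType iSCSI

# The change takes effect after a reboot
Restart-Computer
```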

Downloading, Installing, and Registering the Software

9. Download the StarWind setup executable file from our website by following the link below:
https://www.starwind.com/registration-starwind-virtual-san

NOTE: The setup file is the same for x86 and x64 systems, as well as for all Virtual SAN deployment scenarios.

10. Launch the downloaded setup file on the server where you wish to install StarWind Virtual SAN or one of its components. The setup wizard appears.

11. Read and accept the License Agreement.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-3

Click Next to continue.

12. Read the information about new features and improvements carefully.

The red text indicates warnings for users who are updating existing software installations.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-4

Click Next to continue.

13. Click Browse to modify the installation path if necessary.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-5

Click Next to continue.

14. Select the following components for the minimum setup:

StarWind Virtual SAN Service
StarWind Service is the core of the software. It can create iSCSI targets as well as share virtual and physical devices. The service can be managed from StarWind Management Console on any Windows computer or VSA that is on the network. Alternatively, the service can be managed from StarWind Web Console and deployed separately.

StarWind Management Console
The Management Console is the Graphic User Interface (GUI) part of the software that controls and monitors all storage-related operations (e.g., allows users to create targets and devices on StarWind Virtual SAN servers connected to the network).

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-6

Click Next to continue.

15. Specify the Start Menu folder.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-7

Click Next to continue.

16. Enable the checkbox if you want to create a desktop icon.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-8

Click Next to continue.

17. You will be prompted to request a time-limited fully functional evaluation key or a FREE version key, or to use the commercial license key sent to you with the purchase of StarWind Virtual SAN. Select the appropriate option.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-9

Click Next to continue.

18. Click Browse to locate the license file.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-10

Click Next to continue.

19. Review the licensing information.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-11

Click Next to continue.

20. Verify the installation settings. Click Back to make any changes or Install to continue.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-12

21. Select the appropriate checkbox to launch the StarWind Management Console immediately after the setup wizard is closed.

Click Finish to close the wizard.

22. Repeat installation steps on the partner node.

Configuring Shared Storage

23. Launch the StarWind Management Console by double-clicking the StarWind tray icon.

NOTE: StarWind Management Console cannot be installed on a GUI-less OS. You can install the Console on any Windows edition with a GUI, including desktop versions of Windows.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-14

If StarWind Service and Management Console are installed on the same server, the Management Console automatically adds the local StarWind instance to the Console tree after the first launch. Then, the Management Console automatically connects to the StarWind Service using the default credentials. To add remote StarWind servers to the Console, use the Add Server button on the control panel.

24. StarWind Management Console asks you to specify a default storage pool on the server you connect to for the first time. Please configure the storage pool to use one of the volumes you have prepared earlier. All the devices created through the Add Device wizard are stored in the configured storage pool. Should you decide to use an alternative storage path for your StarWind virtual disks, use the Add Device (advanced) menu item.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-15

Click Yes to configure the storage pool. If you need to change the storage pool destination, click Choose path… and point to the necessary disk in the browser.

NOTE: Each of the arrays that will be used by StarWind Virtual SAN to store virtual disk images should meet the following requirements:

  • be initialized as GPT;
  • have a single NTFS-formatted partition;
  • have a drive letter assigned.

25. Select the StarWind server where you intend to create the device.

26. Press the Add Device (advanced) button on the toolbar.

27. Add Device Wizard appears. Select Hard Disk Device and click Next.

28. Select a Virtual disk and click Next.

29. Specify the virtual disk name, location, and size and click Next.
Below, you can find how to prepare an HA device for the Witness drive. Devices for Cluster Shared Volumes (CSVs) should be created in the same way.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-16

30. Specify virtual disk options.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-17

Click Next.

31. Define the caching policy and specify the cache size and click Next.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-18

NOTE: It is recommended to assign 1 GB of L1 cache in Write-Back mode per 1 TB of storage capacity.

32. Define the Flash Cache Parameters policy and size if necessary. Choose an SSD location in the wizard. Click Next to continue.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-19

NOTE: The recommended size of the L2 cache is 10% of the initial StarWind device size.

33. Specify target parameters.

Select the Target Name checkbox to enter a custom target name. Otherwise, the name is generated automatically based on the target alias. Click Next to continue.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-20

34. Click Create to add a new device and attach it to the target. Then, click Close to close the wizard.

35. Right-click the servers field and select Add Server. Add a new StarWind Server which will be used as the second HA node.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-21

Click OK, and then click Connect to continue.

36. Right-click the recently created device and select Replication Manager.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-22

The Replication Manager window appears. Press the Add Replica button.

37. Select Synchronous two-way replication and click Next to proceed.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-23

38. Specify the partner server IP address.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-24

The default StarWind management port is 3261. If you have configured a different port, type it in the Port number field. Click Next.

39. Select Heartbeat as the Failover Strategy and click Next.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-25

40. Choose Create new Partner Device and click Next.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-26

41. Specify the partner device location if necessary. You can also modify the target name of the device.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-27

Click Next.

42. You can specify the synchronization and heartbeat channels for the HA device on this screen. You can also modify the ALUA settings.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-28

Click Change network settings.

43. Specify the interfaces for synchronization and Heartbeat.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-29

Click OK. Then click Next.

NOTE: It is recommended to configure the Heartbeat and iSCSI channels on the same interfaces to avoid split-brain issues.

44. Select Synchronize from existing Device as a partner device initialization mode and click Next.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-30

45. Press the Create Replica button and click Close.

46. The added devices appear in StarWind Management Console.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-31

Repeat the steps above for the remaining virtual disks that will be used as Cluster Shared Volumes.
Once all devices are created, the Management Console should look as in the image below:

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-32

Discovering Target Portals

In this chapter, we will discover Target Portals from each StarWind node on each Cluster node.

47. Launch Microsoft iSCSI Initiator on Cluster Node 1:
Start > Administrative Tools > iSCSI Initiator, or run iscsicpl from the command line. The iSCSI Initiator Properties window appears.

48. Navigate to the Discovery tab. Click the Discover Portal button.
In the Discover Target Portal dialog, enter the iSCSI IP address of the first StarWind node: 172.16.10.88.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-33

Click the Advanced button.

49. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-34

50. Click the Discover Portal button once again.

51. In the Discover Target Portal dialog, enter the second iSCSI IP address of the first StarWind node: 172.16.30.88, and click the Advanced button.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-35

52. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-36

53. The target portals from the first StarWind node have been added.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-37

54. To discover target portals from the second StarWind node, click the Discover Portal button one more time and enter the iSCSI IP address of the second StarWind node: 172.16.10.99. Click the Advanced button.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-38

55. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-39

56. Click the Discover Portal button one more time. In the Discover Target Portal dialog, enter the second iSCSI IP address of the second StarWind node: 172.16.30.99. Click the Advanced button.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-40

57. Select Microsoft iSCSI Initiator as the Local adapter and select the Cluster Node 1 initiator IP address from the same subnet. Click OK twice to add the portal.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-41

58. All target portals have been successfully added to Cluster Node 1.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-42

59. Perform the steps in this chapter on Cluster Node 2. All target portals added to Cluster Node 2 should look as in the picture below.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-43
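The portal discovery in steps 47-58 can be condensed into a short PowerShell loop. The portal addresses below are the ones used in this guide; the initiator addresses (172.16.x.10) are placeholders for Cluster Node 1's own iSCSI NICs:

```powershell
# Discover all four StarWind target portals from Cluster Node 1.
# Initiator addresses are placeholders - substitute this node's real iSCSI IPs.
$portals = @(
    @{ Portal = '172.16.10.88'; Initiator = '172.16.10.10' },  # StarWind node 1
    @{ Portal = '172.16.30.88'; Initiator = '172.16.30.10' },
    @{ Portal = '172.16.10.99'; Initiator = '172.16.10.10' },  # StarWind node 2
    @{ Portal = '172.16.30.99'; Initiator = '172.16.30.10' }
)
foreach ($p in $portals) {
    New-IscsiTargetPortal -TargetPortalAddress $p.Portal `
                          -InitiatorPortalAddress $p.Initiator
}
```

Run the equivalent loop on Cluster Node 2 with that node's initiator addresses.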

Connecting Targets

60. Launch Microsoft iSCSI Initiator on Cluster Node 1 and click on the Targets tab. The previously created targets should be listed in the Discovered Targets section.

NOTE: If the created targets are not listed, check the firewall settings of the StarWind Server as well as the list of networks served by the StarWind Server (go to StarWind Management Console -> Configuration -> Network).

61. Select a target discovered from the first StarWind Node and click Connect.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-44

62. Select the checkboxes as shown in the image below and click Advanced.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-45

63. Select Microsoft iSCSI Initiator in the Local adapter text field.

64. In the Target portal IP field, select the IP address of the first StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-46

65. To connect the same target via another subnet, select it one more time and click Connect.

66. Select the checkboxes as shown in the image below and click Advanced.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-47

67. Select Microsoft iSCSI Initiator in the Local adapter text field.

68. In the Target portal IP field, select the second IP address of the first StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-48

69. Select the partner target discovered from the second StarWind node and click Connect.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-49

70. Select the checkboxes as shown in the image below and click Advanced.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-50

71. Select Microsoft iSCSI Initiator in the Local adapter text field.

72. In the Target portal IP field, select the IP address of the second StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-51

73. To connect the same target via another subnet, select it one more time and click Connect.

74. Select the checkboxes as shown in the image below and click Advanced.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-52

75. In the Target portal IP field, select the second IP address of the second StarWind node and the initiator IP address from the same subnet. Click OK twice to connect the target.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-53

76. Repeat the above actions for all HA device targets.

After that, repeat the steps described in this section on Cluster Node 2, specifying the corresponding IP addresses. The result should look as in the picture below.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-54
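The connection steps above can likewise be scripted. This sketch connects one target over both iSCSI paths with multipath enabled; the IQN shown is an example (list the real ones with `Get-IscsiTarget`), and the initiator addresses are placeholders:

```powershell
# Example target IQN - replace with a real one from Get-IscsiTarget
$iqn = 'iqn.2008-08.com.starwindsoftware:sw1-witness'

# Connect the target over both iSCSI subnets with MPIO and make the
# sessions persistent across reboots
Connect-IscsiTarget -NodeAddress $iqn -IsPersistent $true -IsMultipathEnabled $true `
    -TargetPortalAddress 172.16.10.88 -InitiatorPortalAddress 172.16.10.10
Connect-IscsiTarget -NodeAddress $iqn -IsPersistent $true -IsMultipathEnabled $true `
    -TargetPortalAddress 172.16.30.88 -InitiatorPortalAddress 172.16.30.10
```

Repeat for the partner target on the second StarWind node and for every HA device.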

77. Initialize the disks and create partitions on them using the Disk Management snap-in. To create the cluster, the disk devices must be initialized and formatted on both nodes.

NOTE: It is recommended to initialize the disks as GPT.
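As a PowerShell alternative to the Disk Management snap-in, the newly connected disks can be initialized and formatted in one pipeline (a sketch; review the output of `Get-Disk` first so you only touch the intended disks):

```powershell
# Initialize every raw (uninitialized) disk as GPT, create a single
# partition with a drive letter, and format it with NTFS
Get-Disk | Where-Object PartitionStyle -EQ 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
```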

Creating a Cluster

NOTE: To avoid issues during cluster validation configuration, it is recommended to install the latest Microsoft updates on each node.

78. Open Server Manager. Select the Failover Cluster Manager item from the Tools menu.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-55

79. Click the Create Cluster item in the Actions section of the Failover Cluster Manager.

Specify servers to be added to the cluster.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-56

Click Next to continue.

80. Verify that your servers are suitable for building a cluster: select “Yes…” and click Next.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-57

81. Specify a cluster name.

NOTE: If the cluster servers get IP addresses over DHCP, the cluster also gets its IP address over DHCP. If the IP addresses are set statically, you need to set a cluster IP address manually.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-58

Click Next to continue.

82. Make sure that all of the settings are correct. Click Previous to make any changes. Click Next to continue.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-59

83. The process of cluster creation starts. Upon completion, the system displays a report with detailed information.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-60

Click Finish to close the wizard.
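Cluster validation and creation (steps 78-83) have PowerShell equivalents in the FailoverClusters module. The node names and cluster IP below are placeholders:

```powershell
# Validate the configuration first, then create the cluster.
# "Node1", "Node2", "HVCluster", and the static address are placeholders.
Test-Cluster -Node "Node1", "Node2"
New-Cluster -Name "HVCluster" -Node "Node1", "Node2" -StaticAddress 192.168.0.100
```

Omit `-StaticAddress` if the nodes obtain their IP addresses over DHCP.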

Adding Cluster Shared Volumes

Follow these steps to add the Cluster Shared Volumes (CSVs) that are necessary for working with Hyper-V virtual machines:

84. Open Failover Cluster Manager.

85. Go to Cluster -> Storage -> Disks.

86. Click Add Disk in the Actions panel, choose the disks from the list and click OK.

87. To configure a Witness drive, right-click the cluster, select More Actions -> Configure Cluster Quorum Settings, follow the wizard, and use the default quorum configuration.

88. Right-click the required disk and select Add to Cluster Shared Volumes.

starwind-virtual-san-compute-and-storage-separated-2-nodes-with-hyper-v-cluster-61

Once the disks are added to the Cluster Shared Volumes list, you can start creating highly available virtual machines on them.

NOTE: To avoid unnecessary CSV overhead, configure each CSV to be owned by one cluster node. This node should also be set as the preferred owner of the VMs it runs.
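Steps 84-88 and the ownership note can be sketched in PowerShell as follows; the disk, node, and VM group names are placeholders (check the real ones with `Get-ClusterResource`):

```powershell
# Add the available StarWind disks to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Use one disk as the quorum witness and promote a data disk to CSV.
# "Cluster Disk 1"/"Cluster Disk 2" are placeholder resource names.
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Pin each CSV to one node and prefer that node for the VMs it runs
Move-ClusterSharedVolume -Name "Cluster Disk 2" -Node "Node1"
Set-ClusterOwnerNode -Group "VM1" -Owners "Node1"
```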

Conclusion

The cluster increases the availability of the services and applications running on it. The CSV feature simplifies storage management by allowing multiple VMs to access a common shared disk.
Resilience is provided by creating multiple connections between the StarWind nodes and the shared storage. Thus, if one of the nodes goes down, the other takes over production operations.