StarWind Resource Library

StarWind Virtual SAN. Compute and Storage Separated 2-Node Cluster. Creating Scale-Out File Server with Hyper-V.

Published: October 14, 2014

Introduction

This document shows how to configure StarWind Virtual SAN® on 2 dedicated servers to provide fault tolerant shared storage to a client hypervisor cluster. A configuration with a dedicated SAN layer gives customers the ability to provide both block level and file level storage to the clients, resulting in a unified SAN/NAS solution which can be used for different applications and virtualization environments at the same time. It also allows users to configure StarWind Virtual SAN as a gateway to consolidate their heterogeneous storage environment into a single storage resource pool. Backend SANs can be a mix of different SANs from different vendors using different storage media like FC and iSCSI.

This guide is intended for experienced Windows system administrators and IT professionals who would like to configure a Scale-Out File Server cluster on dedicated StarWind Virtual SAN nodes, converting the local or iSCSI-attached storage of those nodes into a fault-tolerant shared storage resource that is then presented to the client servers using the SMB3 file share protocol.

A full set of up-to-date technical documentation can always be found here, or by pressing the Help button in the StarWind Management Console.

For any technical inquiries please visit our online community, Frequently Asked Questions page, or use the support form to contact our technical support department.

 

Pre-Configuring the Servers

Here is a reference network diagram of the configuration described in this guide.

Additional network connections may be necessary, depending on the cluster setup and the applications it runs.

1. This document assumes that you have a domain controller and you have added the servers to the domain. It also assumes that you have installed the Failover Clustering and Multipath I/O features on all nodes. These actions can be performed using Server Manager (the Add Roles and Features menu item) or scripted with PowerShell, as sketched after step 4.

2. In order to allow StarWind Virtual SAN to use the Loopback accelerator driver and access the local copy of the data faster, you have to make a minor modification to the StarWind configuration file.

On each node where StarWind is installed, locate the configuration file and open it using Notepad.
The default path is: C:\Program Files\StarWind Software\StarWind\StarWind.cfg

3. Find the string <!--<iScsiDiscoveryListInterfaces value="1"/>--> and uncomment it (it should look as follows: <iScsiDiscoveryListInterfaces value="1"/>). Save the changes and exit Notepad. Should there be any issues saving the document, launch Notepad with Administrator rights and then load the StarWind.cfg file to make the modifications.

4. Restart the StarWind Service and repeat the same procedure on the second StarWind node.
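If you prefer PowerShell, steps 1-4 can be scripted roughly as shown below. This is a minimal sketch only: it assumes Windows Server 2012 R2 or later, the default StarWind installation path, and that the StarWind service is registered as StarWindService (verify the service name on your systems).

# Step 1: install the Failover Clustering and Multipath I/O features (run on every node)
Install-WindowsFeature Failover-Clustering, Multipath-IO -IncludeManagementTools

# Steps 2-3: uncomment the iScsiDiscoveryListInterfaces setting in StarWind.cfg
$cfg = 'C:\Program Files\StarWind Software\StarWind\StarWind.cfg'
(Get-Content $cfg) -replace '<!--\s*(<iScsiDiscoveryListInterfaces value="1"/>)\s*-->', '$1' |
    Set-Content $cfg

# Step 4: restart the StarWind service so the change takes effect (assumed service name)
Restart-Service -Name 'StarWindService'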

 

Enabling Multipath Support

1. Open the MPIO manager: Start -> Administrative Tools -> MPIO.

2. Go to the Discover Multi-Paths tab.

3. Tick the Add support for iSCSI devices checkbox.

4. Click Add.

5. When prompted to restart the server, click Yes to proceed.

NOTE: Repeat this procedure on all nodes.
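MPIO support for iSCSI can also be enabled from an elevated PowerShell prompt; this is the equivalent of the checkbox above (a reboot is still required):

# Claim iSCSI-attached devices for MPIO, then reboot
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer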

 

Installing File Server Role

1. Open Server Manager: Start -> Server Manager.

2. Select: Manage -> Add Roles and Features.

3. Follow the wizard’s steps to install the File Server role.

NOTE: Repeat this procedure on both the first and second node.
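If you would rather script this step, the File Server role service can be installed with a single cmdlet (run it on both StarWind nodes):

# Install the File Server role service used by the Scale-Out File Server
Install-WindowsFeature FS-FileServer -IncludeManagementTools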

 

Configuring Shared Storage

1. Launch the StarWind Management Console: double-click the StarWind tray icon.

NOTE: StarWind Management Console cannot be installed on an operating system without a GUI. You can install it on any of the GUI-enabled Windows Editions including the desktop versions of Windows.

If StarWind Service and Management Console are installed on the same server, the Management Console will automatically add the local StarWind instance to the console tree after the first launch. In the future, the Management Console will automatically connect to it using the default credentials.

2. The StarWind Management Console will ask you to specify the default storage pool on the server you’re connecting to for the first time. Configure the storage pool to use one of the volumes you’ve prepared earlier. All the devices created through the Add Device wizard will be stored on it. Should you decide to use an alternative storage path for your StarWind virtual disks, please use the Add Device (advanced) menu item.

Press the Yes button to configure the storage pool. Should you need to change the storage pool destination, press Choose path… and point the browser to the necessary disk.

NOTE: Each of the arrays to be used by StarWind Virtual SAN to store virtual disk images has to meet the following requirements:

• Be initialized as GPT
• Have a single NTFS-formatted partition
• Have a drive letter assigned

3. Select the StarWind server where you wish to create the device.

4. Press the Add Device (advanced) button on the toolbar.

5. The Add Device Wizard will appear. Select Hard disk device and click Next.

6. Select Virtual disk and click Next.

7. Specify the virtual disk location and size.

8. Specify virtual disk options.

9. Define the caching policy and specify the cache size (in MB).

10. Define the Flash Cache Parameters policy and size, if necessary.

NOTE: It is strongly recommended to use SSD-based storage for “Flash Cache” caching.

11. Specify target parameters.

Select the Target Name checkbox to enter a custom name of a target.
Otherwise, the name will be generated automatically based on the target alias.

12. Click Create to add a new device and attach it to the target. Then click Close to close the wizard.

13. Right-click the servers field and select Add Server. Add the new StarWind server, which will be used as the second HA node.

14. Right-click the device you just created and select Replication Manager.

The Replication Manager window will appear. Press the Add Replica button.

15. Select Synchronous two-way replication.

Click Next to proceed.

16. Specify the partner server IP address.
The default StarWind management port is 3261. If you have configured a different port, please type it in the Port number field.

17. Choose Create new Partner Device.

18. Specify partner device location if necessary. You can also modify the target name of the device.

19. On this screen you can select the synchronization and heartbeat channels for the HA device. You can also modify the ALUA settings.

20. Specify the interfaces for synchronization and Heartbeat.

21. Select the partner device initialization mode Do not Synchronize.
NOTE: Use this initialization mode only when adding a partner to a device that does not contain any data.

22. Press the Create Replica button. Then click Close to close the wizard.

23. The added device will appear in the StarWind Management Console.

Repeat steps 3 – 23 for the remaining virtual disk that will be used for file shares.

Once all devices are created, the Management console should look as follows:

 

Discovering Target Portals

1. Launch Microsoft iSCSI Initiator: Start -> Administrative Tools -> iSCSI Initiator, or run iscsicpl from the command line. The iSCSI Initiator Properties window appears.

2. Navigate to the Discovery tab.

3. Click the Discover Portal button. The Discover Target Portal dialog appears. Type in 127.0.0.1.

Click the Advanced button. Select Microsoft iSCSI Initiator as your Local adapter and select your Initiator IP (leave the default for 127.0.0.1).

Click OK. Then click OK again to complete the Target Portal discovery.

4. Click the Discover Portal… button again.

5. The Discover Target Portal dialog appears. Type in the first IP address of the partner node you will use to connect the secondary mirrors of the HA devices.

6. Select Microsoft iSCSI Initiator as your Local adapter, and select the Initiator IP in the same subnet as the IP address on the partner server from the previous step.

Click OK. Then click OK again to complete the Target Portal discovery.

7. Click the Discover Portal… button again.

8. The Discover Target Portal dialog appears. Type in the second IP address of the partner node you will use to connect the secondary mirrors of the HA devices.

9. Select Microsoft iSCSI Initiator as your Local adapter, and select the Initiator IP in the same subnet as the IP address on the partner server from the previous step.

Click OK. Then click OK again to complete the Target Portal discovery.

10. All target portals are now added on the first node.

11. Complete the same steps for the second node.

12. All target portals are now added on the second node.
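For reference, the same portal discovery can be done with the built-in iSCSI cmdlets. The addresses below are placeholders for the partner data IPs and the matching local initiator IPs from your network diagram, not values taken from this guide:

# Local StarWind portal through the loopback interface
New-IscsiTargetPortal -TargetPortalAddress '127.0.0.1'

# Partner data portals (replace the placeholder addresses with your own)
New-IscsiTargetPortal -TargetPortalAddress '172.16.1.20' -InitiatorPortalAddress '172.16.1.10'
New-IscsiTargetPortal -TargetPortalAddress '172.16.2.20' -InitiatorPortalAddress '172.16.2.10'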

 

Connecting Targets

1. Click the Targets tab. The previously created targets are listed in the Discovered Targets section.

NOTE: If the created targets are not listed, check the firewall settings of the StarWind Server as well as the list of networks served by the StarWind Server (go to StarWind Management Console -> Configuration -> Network).

2. Select the Witness target located on the local server and click Connect.

3. Enable the checkbox as shown in the image below.

4. Select Microsoft iSCSI Initiator in the Local adapter text field.

Select 127.0.0.1 in the Target portal IP field.

5. Select the partner target from the other StarWind node and click Connect.

6. Enable the checkbox as shown in the image below.

7. Select Microsoft iSCSI Initiator in the Local adapter text field. In the Initiator IP field, select the IP address of the first iSCSI data path. In the Target portal IP field, select the corresponding portal IP from the same subnet.

8. Select the connected partner target from the other StarWind node and click Connect again.

 

9. Enable the checkbox as shown in the image below.

10. Select Microsoft iSCSI Initiator in the Local adapter text field. In the Initiator IP field, select the IP address of the second iSCSI path. In the Target portal IP field, select the corresponding portal IP from the same subnet.

11. The Witness disk is now connected to the first node through three paths. The result should look like the image below. Repeat the actions described in the steps above for all HA devices.

12. The result should look like the screenshot below.

13. Repeat steps 1-12 of this section on the second StarWind node, specifying corresponding local and data channel IP addresses. The result should look like the screenshot below.
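The same sessions can be established with PowerShell. The sketch below is illustrative only; the IQNs and IP addresses are placeholders, and each HA device needs one loopback session to its local target plus two data-channel sessions to its partner target:

# List the targets discovered through the portals added earlier
Get-IscsiTarget

# Connect the local copy of the Witness device through the loopback portal
Connect-IscsiTarget -NodeAddress 'iqn.2008-08.com.starwindsoftware:sw-node1-witness' `
    -TargetPortalAddress '127.0.0.1' -InitiatorPortalAddress '127.0.0.1' `
    -IsPersistent $true -IsMultipathEnabled $true

# Connect the partner copy over both data channels (placeholder IQN and addresses)
Connect-IscsiTarget -NodeAddress 'iqn.2008-08.com.starwindsoftware:sw-node2-witness' `
    -TargetPortalAddress '172.16.1.20' -InitiatorPortalAddress '172.16.1.10' `
    -IsPersistent $true -IsMultipathEnabled $true
Connect-IscsiTarget -NodeAddress 'iqn.2008-08.com.starwindsoftware:sw-node2-witness' `
    -TargetPortalAddress '172.16.2.20' -InitiatorPortalAddress '172.16.2.10' `
    -IsPersistent $true -IsMultipathEnabled $true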

 

Multipath Configuration

1. Configure the MPIO policy for each device, specifying localhost (127.0.0.1) as the active path. Select the Witness target located on the local server and click Devices…

2. The Devices dialog appears. Click MPIO.

3. Select the Fail Over Only load balance policy and then designate the local path as active.

4. You can check that 127.0.0.1 is the active path by selecting it from the list and clicking Details.

5. Repeat the same steps with the other targets on the first and second node.

6. Initialize the disks and create partitions on them using the Computer Management snap-in. The disk devices must be visible on both nodes in order to create the cluster.

NOTE: It is recommended to initialize the disks as GPT.
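The load balance policy and the disk preparation can also be handled from the command line. The sketch below uses placeholder disk numbers; mpclaim policy 1 corresponds to Fail Over Only, and designating 127.0.0.1 as the active path is still easiest to verify in the GUI:

# List MPIO disks, then set Fail Over Only (policy 1) for MPIO Disk 0 (placeholder number)
mpclaim.exe -s -d
mpclaim.exe -l -d 0 1

# Initialize a disk as GPT and create a single NTFS partition (placeholder disk number)
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -Confirm:$false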

 

Creating a Cluster

7. Open Server Manager. Select the Failover Cluster Manager item from the Tools menu.

8. Click the Create Cluster link in the Actions section of the Failover Cluster Manager.

9. Specify the servers to be added to the cluster.

10. Validate the configuration by passing the cluster validation tests: select “Yes…”

11. Specify a cluster name.

NOTE: If the cluster servers get IP addresses over DHCP, the cluster also gets its IP address over DHCP.

If the IP addresses are static, you have to provide the cluster IP address manually.

12. Make sure that all of the settings are correct. Click Previous to make any changes.

NOTE: If the Add all eligible storage to the cluster checkbox is selected, the wizard will try to add all StarWind devices to the cluster automatically. The smallest device will be assigned as the Witness.

13. The process of cluster creation starts. After it is completed, the system displays a report with detailed information.
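The same cluster can be built from PowerShell; the node names, cluster name, and static address below are placeholders:

# Validate the nodes, then create the two-node cluster
Test-Cluster -Node SW-NODE1, SW-NODE2
New-Cluster -Name SW-CLUSTER -Node SW-NODE1, SW-NODE2 -StaticAddress 172.16.0.100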

 

Configuring and Managing Scale-Out File Server

To make the Scale-Out File Server highly available, you must have at least one available storage device configured as a Cluster Shared Volume (CSV).

1. Right-click the disk assigned to Available Storage (see the Assigned To column) and click Add to Cluster Shared Volumes.

2. The disk will be displayed as a CSV in the Failover Cluster Manager window, as shown in the screenshot below.
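A one-line PowerShell equivalent, assuming the cluster disk is named as shown (a placeholder; check Get-ClusterResource for the actual name):

# Convert a disk from Available Storage into a Cluster Shared Volume
Add-ClusterSharedVolume -Name 'Cluster Disk 2'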

 

Configuring Cluster Networks Settings

1. Go to Cluster -> Networks.

2. Check Allow clients to connect through this network for the 172.16.1.0/24 and 172.16.2.0/24 subnets. Uncheck this checkbox for the 172.16.0.0/24 subnet.
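These checkboxes map to the cluster network Role property, which can also be set in PowerShell (3 = cluster and client traffic, 1 = cluster traffic only). The network names below are placeholders; match them to the subnets above:

# Allow client access on the two data networks, keep the third for cluster traffic only
(Get-ClusterNetwork 'Cluster Network 1').Role = 3   # 172.16.1.0/24
(Get-ClusterNetwork 'Cluster Network 2').Role = 3   # 172.16.2.0/24
(Get-ClusterNetwork 'Cluster Network 3').Role = 1   # 172.16.0.0/24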

 

Configuring the Scale-Out File Server Role

To configure the Scale-Out File Server role:

1. Open Failover Cluster Manager.

2. In the console tree, right-click the Roles item and select Configure Role. This starts the High Availability Wizard. Click Next to continue.

3. Select the File Server item from the list in the High Availability Wizard.

4. Select File Server for scale-out application data.

5. On the Client Access Point page, in the Name text field, type the NetBIOS name that will be used to access the Scale-Out File Server.

6. Check the provided settings.

7. Review the information on the Summary page.

8. The Failover Cluster Manager window should look as shown in the screenshot below.
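The role can also be created with a single cmdlet; SOFS below is a placeholder for the NetBIOS name you chose on the Client Access Point page:

# Create the Scale-Out File Server role with the chosen client access point name
Add-ClusterScaleOutFileServerRole -Name SOFS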

 

Creating a File Share on a Cluster Shared Volume

1. Right-click the file server role and select Add File Share.

NOTE: If you see a Client Access Point alert, open a command prompt and run the command “ipconfig /flushdns”.

2. Select SMB Share – Applications from the list of profiles. Click Next to proceed.

3. Select a CSV to host the share.

4. Enter a share name and verify the path to the share.

5. Ensure the Enable Continuous Availability checkbox is selected.

6. Grant the permissions listed in the note below.

7. On the Permissions page, click Customize Permissions.

NOTE:
• If you are using Scale-Out File Server for Hyper-V, all Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators must be granted full control on the share and file system.
• If you are using Scale-Out File Server on Microsoft SQL Server, the SQL Server service account must be granted full control on the share and file system.

8. Click Add, click Select a Principal, and then click Object Types.

9. In Object Types, click to select Computers, and click OK.

10. Enter the name of the first client cluster node S2N7, and click OK.

In Permissions Entry, select Full Control, and click OK.

Repeat steps 8-10 for the second client cluster node S2N8. Click Apply when finished, then OK. On the Permissions page, click Next.

11. Review the settings provided.

12. Add the other 2 shares by following steps 1 – 11. These will be used as CSVs in the client cluster.

13. The Failover Cluster Manager should look as shown in the screenshot below. Right-click the witness share.

14. Uncheck the Enable continuous availability checkbox.

Press Apply.
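The shares and their permissions can also be created with the SMB cmdlets. The sketch below mirrors the wizard steps above; the volume path, share names, domain, and computer accounts (S2N7$, S2N8$) are placeholders to adjust to your environment:

# The share folders must exist on the CSV before the shares are created
New-Item -ItemType Directory -Path 'C:\ClusterStorage\Volume1\Share1', 'C:\ClusterStorage\Volume1\Witness'

# Continuously available share for the client cluster CSV data
New-SmbShare -Name Share1 -Path 'C:\ClusterStorage\Volume1\Share1' -ScopeName SOFS `
    -ContinuouslyAvailable $true -FullAccess 'DOMAIN\S2N7$', 'DOMAIN\S2N8$', 'DOMAIN\Domain Admins'
Set-SmbPathAcl -ShareName Share1          # push the share permissions down to the NTFS folder

# Witness share without continuous availability
New-SmbShare -Name Witness -Path 'C:\ClusterStorage\Volume1\Witness' -ScopeName SOFS `
    -ContinuouslyAvailable $false -FullAccess 'DOMAIN\S2N7$', 'DOMAIN\S2N8$', 'DOMAIN\Domain Admins'
Set-SmbPathAcl -ShareName Witness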

 

Creating Client Cluster

This chapter shows how to create a failover cluster using the SMB shares we have configured in the previous chapter.

1. Open the Create Cluster wizard.

2. Select the nodes to participate in the cluster.

3. Pass the cluster validation tests to ensure the configuration is suitable for clustering.

4. Select the cluster name and cluster IP address.

5. Verify the settings provided.

6. After creating the cluster, you will see a warning like the one in the screenshot below.

 

Configuring Client Cluster

1. Right-click the cluster you created and select Configure Cluster Quorum Settings…

2. The Configure Cluster Quorum Wizard appears.

3. Choose Select the quorum witness.

4. Choose Configure a file share witness.

5. Enter the path to the witness file share.

6. Verify the settings provided.

7. You will see a report once the quorum settings are changed.

8. You should now see the file share witness in Cluster Core Resources.
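On the client cluster, the file share witness can also be assigned with a single cmdlet; the UNC path below is a placeholder for the witness share created earlier:

# Point the client cluster quorum at the witness share on the Scale-Out File Server
Set-ClusterQuorum -FileShareWitness '\\SOFS\Witness'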

NOTE: When you start creating virtual machines, specify the CSV file share as the virtual machine location.