
StarWind Virtual SAN®
Scale-Out Existing Deployments
for VMware vSphere

INTRODUCTION

The idea behind scale-out is to grow both storage and compute power by adding additional nodes instead of adding disks, CPUs, NICs or RAM to individual systems. Increasing the number of nodes increases reliability by eliminating single points of failure. It also gives customers a more flexible computing environment by treating all servers as a unified compute and storage resource pool. Adding nodes with the same hardware configuration isn’t necessary. Customers can add performance-tuned or capacity-tuned nodes depending on their IT environment requirements.

VMware requires shared storage to guarantee data safety, allow virtual machine migration, enable continuous application availability, and eliminate any single point of failure within an IT environment. VMware users can choose between two options when selecting shared storage:

Hyperconverged solutions: share the same hardware resources between the application (e.g. hypervisor, database) and the shared storage, thus decreasing the TCO and achieving outstanding performance.

Compute and Storage separated solutions: keep the compute and storage layers separate from each other, which simplifies maintenance, increases hardware usage efficiency, and allows the system to be built precisely for the task at hand.

This guide is intended for experienced StarWind users, VMware and Windows system administrators, and IT professionals who would like to add a scale-out node to a StarWind Virtual SAN® cluster. It provides step-by-step guidance on scaling out a hyperconverged 2-node StarWind Virtual SAN that converts the storage resources of separate general-purpose servers into a fault-tolerant shared storage resource for ESXi.

A full set of up-to-date technical documentation can always be found here or by pressing the Help button in the StarWind Management Console. You can also invoke Technical Support directly from StarWind VSAN Help.

For any technical inquiries please visit our online community, Frequently Asked Questions page, or use the support form to contact our technical support department.

The network interconnection diagrams are shown below.

Hyperconverged scenario diagram

Compute and Storage Separated scenario diagram

Preparing Cluster

The following steps should be performed for the Hyperconverged scenario.

For the Compute and Storage Separated scenario, please proceed to the Replacing Partner for (DS2) section.

1. Open the cluster tab, click Add host, and enter the host name or IP address of the ESXi host.

2. Click Cluster -> Configure -> Edit. In the vSphere Availability area, tick the Turn ON vSphere HA checkbox.
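
The same two steps can also be scripted with PowerCLI. The sketch below is an illustration only; the vCenter address, cluster name, host IP, and credentials are assumptions and should be replaced with real values.

# Add the new ESXi host to the existing cluster and make sure vSphere HA is enabled
Connect-VIServer -Server "vcenter-address" -User "administrator@vsphere.local" -Password "Password"
Add-VMHost -Name "esxi-host-ip" -Location (Get-Cluster -Name "Cluster") -User "root" -Password "Password" -Force
Set-Cluster -Cluster (Get-Cluster -Name "Cluster") -HAEnabled:$true -Confirm:$false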

Configuring Networks

Configure network interfaces on each node to make sure that the Synchronization and iSCSI/StarWind Heartbeat interfaces are in different subnets and physically connected according to the network diagram for the hyperconverged scenario.

3. Create a vSwitch for the iSCSI traffic via the first Ethernet adapter.

4. Create a vSwitch for the iSCSI traffic via the second Ethernet adapter.

NOTE: A Virtual Machine Port Group and a VMkernel port should be created for each iSCSI vSwitch. A static IP address should be assigned to the VMkernel port.

5. Create VMkernel ports, one per iSCSI channel.

6. In the VMkernel adapters pane, click Add Networking.

7. Select Virtual Machine Port Group for a Standard Switch for iSCSI traffic.

8. Repeat steps 3-4 to create two vSwitches for the SYNC traffic without VMkernel ports. A PowerCLI sketch of this network configuration is shown below.
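
The equivalent vSwitch and VMkernel configuration can be sketched in PowerCLI as follows, assuming an existing Connect-VIServer session. The vSwitch names, uplink NICs, and IP address used here are illustrative assumptions; adjust them to match the network diagram, and repeat for the second iSCSI and SYNC networks.

$VMHost = Get-VMHost -Name "esxi-host-ip"
# iSCSI vSwitch with a VM port group and a VMkernel port holding a static IP
$iscsiSwitch = New-VirtualSwitch -VMHost $VMHost -Name "vSwitch-iSCSI1" -Nic "vmnic1"
New-VirtualPortGroup -VirtualSwitch $iscsiSwitch -Name "iSCSI1" | Out-Null
New-VMHostNetworkAdapter -VMHost $VMHost -VirtualSwitch $iscsiSwitch -PortGroup "iSCSI1-VMkernel" -IP "172.16.10.10" -SubnetMask "255.255.255.0" | Out-Null
# SYNC vSwitch with a VM port group only (no VMkernel port)
$syncSwitch = New-VirtualSwitch -VMHost $VMHost -Name "vSwitch-Sync1" -Nic "vmnic3"
New-VirtualPortGroup -VirtualSwitch $syncSwitch -Name "Sync1" | Out-Null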

Preparing StarWind VM

9. Create a VM (SW3) on this host. Install Windows Server 2016 (or 2012 R2) and then install StarWind Virtual SAN.

Please make sure that the VMs meet the following parameters:

RAM: at least 4 GB (plus the size of the RAM cache if it is planned to be used);

CPUs: at least 1 socket with 2 GHz;

Hard disk 1: 100 GB for OS (recommended);

Hard disk 2: Depends on the storage volume to be used as shared storage.

Network adapter 1: Management

Network adapter 2: iSCSI1

Network adapter 3: iSCSI2

Network adapter 4: Sync1

Network adapter 5: Sync2
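
For reference, a VM shell matching these parameters could be created with PowerCLI roughly as follows; the datastore, port group names, CPU count, and second-disk size are illustrative assumptions.

# Create the StarWind VM shell (the OS and StarWind VSAN are installed afterwards)
$VMHost = Get-VMHost -Name "esxi-host-ip"
New-VM -VMHost $VMHost -Name "SW3" -NumCpu 2 -MemoryGB 4 -DiskGB 100 -DiskStorageFormat Thick -Datastore "LocalDatastore" -NetworkName "Management","iSCSI1","iSCSI2","Sync1","Sync2" -GuestId "windows9Server64Guest"
# Add the second disk that will back the shared storage (size depends on the environment)
Get-VM -Name "SW3" | New-HardDisk -CapacityGB 1024 | Out-Null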

Configuring Automatic Storage Rescan

For each ESXi host, configure an automatic storage rescan.

10. Log in to StarWind VM and install vSphere PowerCLI on each StarWind Virtual Machine by adding the PowerShell module (Internet connectivity is required). To do so, run the following command in PowerShell:
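
The PowerCLI module is published in the PowerShell Gallery, so the install command typically looks like this (run PowerShell as Administrator):

Install-Module -Name VMware.PowerCLI -Scope AllUsers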

NOTE: When using Windows Server 2012 R2, online installation of PowerCLI requires Windows Management Framework 5.1 or a later version to be available on the VMs. Windows Management Framework 5.1 can be downloaded by following the link: https://go.microsoft.com/fwlink/?linkid=839516

11. Open PowerShell and change the Execution Policy to Unrestricted by running the following command:
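
For example, in an elevated PowerShell session:

Set-ExecutionPolicy Unrestricted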

12. Create the PowerShell script which will perform an HBA rescan on the hypervisor host.

In the appropriate lines, specify the IP address and login credentials of the ESXi host on which the current StarWind VM is stored and running:

$ESXiHost1 = "IP address"

$ESXiUser = "Login"

$ESXiPassword = "Password"

Save the script as rescan_script.ps1 to the root of the C:\ drive of the VM.
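
A minimal sketch of what such a rescan script might look like is shown below, assuming the standard PowerCLI cmdlets (Connect-VIServer, Get-VMHostStorage, Disconnect-VIServer); the script provided with StarWind documentation may differ in detail.

Import-Module VMware.PowerCLI
$ESXiHost1 = "IP address"
$ESXiUser = "Login"
$ESXiPassword = "Password"
# Connect to the ESXi host that runs this StarWind VM
Connect-VIServer -Server $ESXiHost1 -User $ESXiUser -Password $ESXiPassword | Out-Null
# Rescan all HBAs and VMFS volumes so newly presented StarWind devices become visible
Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null
Disconnect-VIServer -Server $ESXiHost1 -Confirm:$false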

13. Perform the configuration steps above on the partner nodes.

14. Go to Control Panel -> Administrative Tools -> Task Scheduler -> Create Basic Task and specify the task name:

15. Select When a specific event is logged option and click Next.

16. Select Application in the Log drop-down menu, type StarWindService as the event source and 788 as event ID. Click Next.

17. Select the Start a Program option as the action the task should perform and click Next.

18. Type powershell.exe in the Program/script field. In the Add arguments field, type:

-ExecutionPolicy Bypass -NoLogo -NonInteractive -NoProfile -WindowStyle Hidden -File C:\rescan_script.ps1

19. Click Finish to exit the wizard.

20. Configure the task to run with the highest privileges by enabling the corresponding checkbox at the bottom of the window. Make sure that Run whether user is logged on or not option is selected.

21. Switch to the Triggers tab. Verify that the trigger on event 788 is set up correctly.

22. Click New and add other triggers for Event ID 782, 257, 773, and 817.

23. All the added triggers should look as shown in the picture below.

24. Switch to the Actions tab and verify the task parameters.

25. Press OK and type in the credentials for the user whose rights will be used to execute the command.
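
If preferred, the same task with its first event trigger can be registered from an elevated PowerShell prompt with schtasks; this is only a sketch (the task name is an assumption, the task runs as SYSTEM rather than a named user, and the remaining Event IDs 782, 257, 773, and 817 still need to be added as additional triggers):

schtasks /Create /TN "StarWind HBA Rescan" /TR "powershell.exe -ExecutionPolicy Bypass -NoLogo -NonInteractive -NoProfile -WindowStyle Hidden -File C:\rescan_script.ps1" /SC ONEVENT /EC Application /MO "*[System[Provider[@Name='StarWindService'] and EventID=788]]" /RU SYSTEM /RL HIGHEST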

Replacing Partner for (DS2)

26. Add the third StarWind node (SW3).

27. Open Replication Manager for DS2 on the second StarWind node.

28. Click Remove Replica.

29. Click Add Replica.

30. Select Synchronous "Two-Way" Replication and click Next.

31. Enter Host Name or IP Address of the third StarWind node.

32. Select Create new Partner device.

33. Click Change Network Settings. Specify the interfaces for Synchronization and Heartbeat channels. Click OK. Then click Next.

34. Click OK to return to Network Options for Synchronization Replication. Click Next.

35. Click Create Replica.

36. After creation, click Finish to close the Replication Wizard. The result should look as shown in the screenshot below.

Creating Virtual Disk (DS3)

37. Open Add Device wizard by right-clicking the StarWind server and selecting Add Device (advanced) from the shortcut menu or by clicking the Add Device (advanced) button on the toolbar.

38. Once Add Device wizard appears, follow the instructions to complete the creation of a new disk.

39. Select Hard Disk Device as the type of device to be created. Click Next to continue.

40. Select Virtual Disk. Click Next to continue.

41. Specify virtual disk location and size.

42. Specify Virtual Disk Options and click Next to continue.

NOTE: Sector size should be 512 bytes when using ESXi.

43. Define the RAM caching policy and specify the cache size in the corresponding units.

NOTE: It is recommended to assign 1 GB of L1 cache in Write-Back or Write-Through mode per 1 TB of storage capacity. The cache size should correspond to the storage working set of the servers.

44. Define the Flash caching policy and the cache size. Click Next to continue.

NOTE: The recommended size of the L2 cache is 10% of the initial StarWind device capacity.

45. Specify Target Parameters. Select the Target Name checkbox to enter a custom name of the target if required. Otherwise, the name will be generated automatically in accordance with the specified target alias. Click Next to continue.

46. Click Create to add a new device and attach it to the target and Finish to close the wizard.

47. Right-click on the recently created device and select Replication Manager from the shortcut menu.

48. Click Add Replica and select Synchronous "Two-Way" Replication.

49. Specify partner Host Name or IP address and Port Number.

50. Select Create new Partner Device and click Next.

51. Click Change Network Settings.

52. Click Create Replica.

53. The added devices are seen in the StarWind Console.

Configuring ESXi Cluster and Adding Discovery Portals

54. To connect the previously created devices to the ESXi host, log in to vCenter, click Storage Adapters and +, and select the Enabled option to enable the Software iSCSI storage adapter.

55. Under Dynamic Discovery click the Add… button to add iSCSI servers.

56. On the same page, click Targets tab and Add to add dynamic targets.

57. Perform the same actions for each StarWind server by clicking Add and specifying the server IP address. All added IPs are shown in the screenshot below.

58. Rescan all storage adapters. In the Rescan Storage dialog, click OK.

59. Now, the previously created StarWind devices are visible.
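
Steps 54-58 can also be scripted with PowerCLI, assuming an existing vCenter session. The host name and the StarWind data-network IP addresses below are illustrative assumptions, and only the software iSCSI adapter is assumed to be present on the host.

$VMHost = Get-VMHost -Name "esxi-host-ip"
# Enable the software iSCSI adapter on the host
Get-VMHostStorage -VMHost $VMHost | Set-VMHostStorage -SoftwareIScsiEnabled $true | Out-Null
# Add each StarWind server IP as a dynamic discovery (Send Targets) portal
$iscsiHba = Get-VMHostHba -VMHost $VMHost -Type iScsi
foreach ($portal in "172.16.10.11","172.16.20.11") {
    New-IScsiHbaTarget -IScsiHba $iscsiHba -Address $portal -Type Send | Out-Null
}
# Rescan so the StarWind devices become visible
Get-VMHostStorage -VMHost $VMHost -RescanAllHba -RescanVmfs | Out-Null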

Creating Datastore (DS3)

60. Right-click on Datacenter -> Storage -> New Datastore. Choose VMFS.

61. Specify Datastore name, select the previously discovered StarWind device, and click Next.

62. Select VMFS 6 file system. Click Next.

63. Enter Datastore Size. Click Next.

64. Verify the settings. Click Finish.

65. Check another host for the new datastore. If the new datastore is not listed among the existing datastores, click Rescan All.
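
A PowerCLI sketch of the datastore creation follows, assuming the StarWind LUN can be identified by its vendor string; the cluster and datastore names are assumptions.

# Find the StarWind LUN and create a VMFS 6 datastore on it
$VMHost = Get-VMHost -Name "esxi-host-ip"
$lun = Get-ScsiLun -VMHost $VMHost -LunType disk | Where-Object {$_.Vendor -match "STARWIND"} | Select-Object -First 1
New-Datastore -VMHost $VMHost -Name "DS3" -Path $lun.CanonicalName -Vmfs -FileSystemVersion 6 | Out-Null
# Rescan the remaining hosts so the new datastore appears on them as well
Get-Cluster -Name "Cluster" | Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null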

66. Change the Path Selection Policy for the datastores from Most Recently Used (VMware) to Round Robin (VMware). To check and change this parameter manually, the hosts should be connected to vCenter.

67. Multipathing configuration can be checked only in vCenter. To check it, click the Configure button, select the Storage Devices tab, select the device, and click the Edit Multipathing button.

68. Select the Round Robin (VMware) MPIO policy. Click OK to finish the configuration process.
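
The same policy change can be applied from PowerCLI to every StarWind-backed device on a host, for example (host name and vendor filter are assumptions):

# Switch all StarWind LUNs on the host to the Round Robin path selection policy
$VMHost = Get-VMHost -Name "esxi-host-ip"
Get-ScsiLun -VMHost $VMHost -LunType disk | Where-Object {$_.Vendor -match "STARWIND"} | Set-ScsiLun -MultipathPolicy RoundRobin | Out-Null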

CONCLUSION

Implementing the scale-out approach gives a robust, well-balanced, and easily scalable configuration. It gives IT administrators the following significant advantages:

Managing a scale-out cluster is easy and requires no downtime: the IT infrastructure is upgraded simply by adding nodes, and old hardware can be taken out of service without interrupting operations.

The performance impact in case of a failure is much lower compared to the scale-up approach. Should any node fail, all VMs and applications fail over and re-balance across the remaining nodes of the scale-out cluster, so the failure does not cause an I/O hit on the production environment.

Implementing the scale-out approach lowers OPEX and TCO because the configuration doesn't require time-consuming forklift upgrades or after-hours maintenance.