StarWind Virtual SAN: Configuration Guide for Proxmox Virtual Environment [KVM], VSAN Deployed as a Controller VM using PowerShell CLI

Annotation

Relevant products
This guide applies to StarWind Virtual SAN, StarWind Virtual SAN Free (starting from version 1.2xxx – Oct. 2023).

Purpose

This document outlines how to configure a Proxmox Cluster using StarWind Virtual SAN (VSAN), with VSAN running as a Controller Virtual Machine (CVM). The guide includes steps to prepare Proxmox hosts for clustering, configure physical and virtual networking, and set up the Virtual SAN Controller Virtual Machine.

For more information about StarWind VSAN architecture and available installation options, please refer to the StarWind Virtual SAN (VSAN) Getting Started Guide.

Audience

This technical guide is intended for storage and virtualization architects, system administrators, and partners designing virtualized environments using StarWind Virtual SAN (VSAN).

Expected result

The end result of following this guide will be a fully configured high-availability Proxmox Cluster that includes virtual machine shared storage provided by StarWind VSAN.

Prerequisites

StarWind Virtual SAN system requirements

Prior to installing StarWind Virtual SAN, please make sure that the system meets the requirements, which are available via the following link:
https://www.starwindsoftware.com/system-requirements

Recommended RAID settings for HDD and SSD disks:
https://knowledgebase.starwindsoftware.com/guidance/recommended-raid-settings-for-hdd-and-ssd-disks/

Please read StarWind Virtual SAN Best Practices document for additional information:
https://www.starwindsoftware.com/resource-library/starwind-virtual-san-best-practices

 

Solution diagram

The diagram below illustrates the network and storage configuration of the solution:

[Solution diagram: 2-node KVM configuration]

1. Install Proxmox VE on each server.

2. Create a Proxmox cluster and join both hosts to it.

3. Define at least 2x network interfaces on each node that will be used for the Synchronization and iSCSI/StarWind heartbeat traffic. Do not run iSCSI/Heartbeat and Synchronization traffic over the same physical link. Synchronization and iSCSI/Heartbeat links can be connected either via redundant switches or directly between the nodes (see diagram above).

4. Create separate virtual networks for iSCSI and Synchronization traffic based on the interfaces selected above: one for the iSCSI/StarWind Heartbeat channel (iSCSI) and another one for the Synchronization channel (Sync).

5. Assign a physical NIC to each of these networks on every host and configure static IP addresses; a minimal configuration sketch follows this list. In this document, the 172.16.10.x subnet is used for iSCSI/StarWind heartbeat traffic, while the 172.16.20.x subnet is used for Synchronization traffic.
NOTE: If the NIC supports SR-IOV, enable it for the best performance. Contact StarWind support for additional details.
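For reference, a minimal /etc/network/interfaces sketch for the two networks on a Proxmox host is shown below. The NIC names (eno2, eno3), bridge names, and host addresses are assumptions; adjust them to the actual hardware and addressing plan:

# iSCSI/StarWind heartbeat bridge (names and addresses are examples)
auto vmbr1
iface vmbr1 inet static
        address 172.16.10.1/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        mtu 9000

# Synchronization bridge (names and addresses are examples)
auto vmbr2
iface vmbr2 inet static
        address 172.16.20.1/24
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
        mtu 9000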

Deploying StarWind Virtual SAN CVM

1. Download the StarWind VSAN CVM for KVM: VSAN by StarWind: Overview

2. Extract the StarWindAppliance.qcow2 file from the downloaded archive.

3. Upload the StarWindAppliance.qcow2 file to the /root/ directory of the Proxmox host via any SFTP client (e.g. WinSCP).
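If no SFTP client is at hand, scp from the machine holding the extracted file achieves the same result (replace the host address placeholder with your own):

scp StarWindAppliance.qcow2 root@<proxmox-host>:/root/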

4. Create a VM without an OS: log into the Proxmox host via the web GUI and click Create VM.


5. Choose the node on which to create the VM. Enable the Start at boot checkbox and set the Start/Shutdown order to 1. Click Next.

6. Choose Do not use any media and set the Guest OS type to Linux. Click Next.

7. Specify the system options. Choose the q35 machine type and check the Qemu Agent box. Click Next.

8. Remove all disks from the VM. Click Next.

9. Assign 8 cores to the VM and choose the Host CPU type. Click Next.

10. Assign at least 8GB of RAM to the VM. Click Next.

11. Configure the Management network for the VM. Click Next.

12. Confirm the settings. Click Finish.

13. Connect to the Proxmox host via SSH and attach the StarWindAppliance.qcow2 file to the VM.
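The exact command is not shown in this copy of the guide. A common way to attach the image, assuming VM ID 100 and the local-lvm storage (both are examples), is qm importdisk:

# import the appliance image as a new disk of VM 100 (ID and storage are examples)
qm importdisk 100 /root/StarWindAppliance.qcow2 local-lvm
# the imported disk then shows up on the VM Hardware page as an unused disk (next step)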

14. Open the VM and go to the Hardware page. Add the unused SCSI disk to the VM.

15. Attach the network interfaces for Synchronization and iSCSI/Heartbeat traffic.

16. Open the Options page of the VM. Select Boot Order and click Edit.

17. Move the scsi0 device to position #1 in the boot order.

18. Repeat all the steps from this section on the other Proxmox hosts.

Configuring StarWind Virtual SAN VM settings

1. Open the VM console and check the IP address received via DHCP (or the one that was assigned manually).

Alternatively, if there is no DHCP, log into the VM via the console and assign a static IP using nmcli.
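A minimal nmcli sketch, assuming the management connection is named eth0 and using this guide's example addressing:

# assign a static management IP (connection name and addresses are examples)
sudo nmcli connection modify eth0 ipv4.method manual \
  ipv4.addresses 192.168.12.10/24 ipv4.gateway 192.168.12.1 ipv4.dns 192.168.1.1
sudo nmcli connection up eth0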

2. Now, open the web browser and enter the IP address of the VM. Log into the VM using the following default credentials:  

  • Username: user  
  • Password: rds123RDS  
  • NOTE: Make sure to check the “Reuse my password for privileged tasks” box.

3. After a successful login, click Accounts on the left sidebar.

4. Select a user and click Set Password.  

5. On the left sidebar, click Networking.


Here, the Management IP address of the StarWind Virtual SAN Virtual Machine can be configured, as well as IP addresses for iSCSI and Synchronization networks. In case the Network interface is inactive, click on the interface, turn it on, and set it to Connect automatically.

6. Click Automatic (DHCP) to set the IP address (and, for the Management interface, the DNS and gateway).


7. The result should look like the picture below:

[Screenshot: VM networks]

NOTE: It is recommended to set the MTU to 9000 on the interfaces dedicated to iSCSI and Synchronization traffic. Change Automatic to 9000, if required.

8. Alternatively, log into the VM via the Proxmox console and assign a static IP address by editing the interface configuration file located at the following path: /etc/sysconfig/network-scripts

9. Open the file corresponding to the Management interface using a text editor, for example: sudo nano /etc/sysconfig/network-scripts/ifcfg-eth0

10. Edit the file:  

  • Change the line BOOTPROTO=dhcp to: BOOTPROTO=static  
  • Add the required IP settings to the file (see the consolidated sketch after this list):
  • IPADDR=192.168.12.10  
  • NETMASK=255.255.255.0  
  • GATEWAY=192.168.12.1  
  • DNS1=192.168.1.1  
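Putting it together, the resulting file would look like the sketch below (the first three lines are typical ifcfg boilerplate and may already be present):

TYPE=Ethernet
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.12.10
NETMASK=255.255.255.0
GATEWAY=192.168.12.1
DNS1=192.168.1.1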

11. Restart the interface using the following commands: sudo ifdown eth0 and sudo ifup eth0, or restart the VM.

12. Change the Host Name from the System tab by clicking on it.

13. Change the System time and NTP settings if required.

14. Repeat the steps above on each StarWind VSAN VM.  

Configuring Storage

StarWind Virtual SAN CVM can work on top of Hardware RAID or Linux Software RAID (MDADM) inside of the Virtual Machine.
Please select the required option:

Configuring storage with hardware RAID

1. Open the VM Hardware page. Click Add -> Hard Disk.
NOTE: The RAID controller can be passed through to the VM: https://pve.proxmox.com/wiki/PCI(e)_Passthrough

2. Set the disk size and storage location. Click Add.

3. Start StarWind VSAN CVM.

4. Log into the StarWind VSAN VM web console and access the Storage section. Locate the recently added disk in the Drives section and choose it.
5. The added disk does not have any partitions or filesystem. Press the Create Partition Table button to create the partition table.
6. Press Create Partition to format the disk and set the mount point. The mount point should be as follows: /mnt/%yourdiskname%
7. In the Storage section, under Content, navigate to the Filesystem tab. Click Mount.

 

 

Configuring StarWind Management Components

1. Install StarWind Management Console on each server or on a separate workstation with Windows OS (Windows 7 or higher, Windows Server 2008 R2 and higher) using the installer available here.
NOTE: StarWind Management Console and PowerShell Management Library components are required.

2. Open the StarWind InstallLicense.ps1 script with PowerShell ISE as administrator. It can be found here:
C:\Program Files\StarWind Software\StarWind\StarWindX\Samples\powershell\InstallLicense.ps1
Type the IP address of the StarWind Virtual SAN VM and the credentials of the StarWind Virtual SAN service (default login: root, password: starwind).
Add the path to the license key.
3. After the license key is applied, StarWind devices can be created.
NOTE: In order to manage StarWind Virtual SAN service (e.g. create ImageFile devices, VTL devices, etc.), StarWindX PowerShell library can be used. StarWind Management Console can be used to monitor StarWind Virtual SAN Service.

Creating StarWind HA LUNs using PowerShell

1. Open PowerShell ISE as Administrator.

2. Open the StarWindX sample CreateHA_2.ps1 using PowerShell ISE. It can be found here:
C:\Program Files\StarWind Software\StarWind\StarWindX\Samples\

3. Configure the script parameters according to the following example:
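The original example did not survive in this copy. Below is a hedged sketch of how the editable parameter section of CreateHA_2.ps1 might look for a 2-node setup; all addresses, sizes, and names are assumptions consistent with the examples used throughout this guide (management 192.168.12.10/.11, iSCSI 172.16.10.10/.20, Sync 172.16.20.10/.20, mount point /mnt/disk1):

param($addr="192.168.12.10", $port=3261, $user="root", $password="starwind",
$addr2="192.168.12.11", $port2=3261, $user2="root", $password2="starwind",
#common
$initMethod="Clear",
$size=1024,        # 1 GB test device (example)
$sectorSize=512,
$failover=0,       # Heartbeat failover strategy
#primary node
$imagePath="VSA Storage\mnt\disk1",
$imageName="masterImg21",
$createImage=$true,
$targetAlias="targetha21",
$syncInterface="#p2=172.16.20.20:3260",
$hbInterface="#p2=172.16.10.20:3260",
#secondary node
$imagePath2="VSA Storage\mnt\disk1",
$imageName2="masterImg22",
$createImage2=$true,
$targetAlias2="targetha22",
$syncInterface2="#p1=172.16.20.10:3260",
$hbInterface2="#p1=172.16.10.10:3260"
)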

Detailed explanation of script parameters:

-addr, -addr2 – local and partner node IP address.
Format: string. Default value: 192.168.0.1, 192.168.0.1
Allowed values: localhost, IP address
-port, -port2 – local and partner node port.
Format: string. Default value: 3261
-user, -user2 – local and partner node user name.
Format: string. Default value: root
-password, -password2 – local and partner node user password.
Format: string. Default value: starwind

#common
-initMethod
Format: string. Default value: Clear
-size – set size for the HA device (MB)
Format: integer. Default value: 12
-sectorSize – set sector size for the HA device
Format: integer. Default value: 512
Allowed values: 512, 4096
-failover – set failover strategy type
Format: integer. Default value: 0 (Heartbeat)
Allowed values: 0, 1 (Node Majority)
-bmpType – set bitmap type; applied to both partners at once
Format: integer. Default value: 1 (RAM)
Allowed values: 1, 2 (DISK)
-bmpStrategy – set journal strategy; applied to both partners at once
Format: integer. Default value: 0
Allowed values: 0; 1 – Best Performance (Failure); 2 – Fast Recovery (Continuous)

#primary node
-imagePath – set path to store the device file
Format: string. Default value: “My Computer\C\starwind”. For Linux, the following format should be used: “VSA Storage\mnt\mount_point”
-imageName – set device name
Format: string. Default value: masterImg21
-createImage – set whether to create the image file
Format: boolean. Default value: true
-targetAlias – set alias for the target
Format: string. Default value: targetha21
-poolName – set storage pool
Format: string. Default value: pool1
-aluaOptimized – set ALUA Optimized
Format: boolean. Default value: true
-cacheMode – set L1 cache type (optional parameter)
Format: string. Default value: wb
Allowed values: none, wb, wt
-cacheSize – set L1 cache size in MB (optional parameter)
Format: integer. Default value: 128
Allowed values: 1 and more
-syncInterface – set sync channel IP address of the partner node
Format: string. Default value: “#p2={0}:3260”
-hbInterface – set heartbeat channel IP address of the partner node
Format: string. Default value: “”
-createTarget – set whether to create the target
Format: string. Default value: true
Even if the -createTarget parameter is not specified, the target will be created automatically. If the parameter is set to -createTarget $false, an attempt will be made to create the device with the existing targets whose names are specified in -targetAlias (the targets must already exist).
-bmpFolderPath – set path to save the bitmap file
Format: string.

#secondary node
-imagePath2 – set path to store the device file
Format: string. Default value: “My Computer\C\starwind”. For Linux, the following format should be used: “VSA Storage\mnt\mount_point”
-imageName2 – set device name
Format: string. Default value: masterImg21
-createImage2 – set whether to create the image file
Format: boolean. Default value: true
-targetAlias2 – set alias for the target
Format: string. Default value: targetha22
-poolName2 – set storage pool
Format: string. Default value: pool1
-aluaOptimized2 – set ALUA Optimized
Format: boolean. Default value: true
-cacheMode2 – set L1 cache type (optional parameter)
Format: string. Default value: wb
Allowed values: wb, wt
-cacheSize2 – set L1 cache size in MB (optional parameter)
Format: integer. Default value: 128
Allowed values: 1 and more
-syncInterface2 – set sync channel IP address of the partner node
Format: string. Default value: “#p1={0}:3260”
-hbInterface2 – set heartbeat channel IP address of the partner node
Format: string. Default value: “”
-createTarget2 – set whether to create the target
Format: string. Default value: true
Even if the -createTarget2 parameter is not specified, the target will be created automatically. If the parameter is set to -createTarget2 $false, an attempt will be made to create the device with the existing targets whose names are specified in -targetAlias2 (the targets must already exist).
-bmpFolderPath2 – set path to save the bitmap file
Format: string.

Selecting the Failover Strategy

StarWind provides 2 options for configuring a failover strategy:

Heartbeat

The Heartbeat failover strategy helps avoid the “split-brain” scenario, in which the HA cluster nodes are unable to synchronize but continue to accept write commands from the initiators independently. It can occur when all synchronization and heartbeat channels disconnect simultaneously and the partner nodes do not respond to the node’s requests. As a result, the StarWind service assumes the partner nodes to be offline and continues operating in single-node mode using the data written to it.
If at least one heartbeat link is online, the StarWind services can communicate with each other via this link. The device with the lowest priority will be marked as not synchronized and subsequently blocked from further read and write operations until the synchronization channel is restored. At the same time, the partner device on the synchronized node flushes data from the cache to the disk to preserve data integrity in case the node goes down unexpectedly. It is recommended to assign more independent heartbeat channels during replica creation to improve system stability and avoid the “split-brain” issue.
With the Heartbeat failover strategy, the storage cluster will continue working with only one StarWind node available.

Node Majority

The Node Majority failover strategy ensures proper failover processing without any additional heartbeat links. The failure-handling process starts when a node detects the absence of a connection with its partner.
The main requirement for keeping a node operational is an active connection with more than half of the HA device’s nodes. The count of available partners is based on their “votes”.
In the case of two-node HA storage, all nodes will be disconnected if there is a problem on a node itself or in the communication between them. Therefore, the Node Majority failover strategy requires adding a third Witness node or file share (SMB), which participates in the node count for the majority but neither stores data nor processes clients’ requests. If an HA device is replicated between 3 nodes, no Witness node is required.
With the Node Majority failover strategy, the failure of only one node can be tolerated. If two nodes fail, the third node will also become unavailable to clients’ requests.

Connecting StarWind HA Storage to Proxmox Hosts

1. Connect to the Proxmox host via SSH and install the multipath tools.
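On a Proxmox (Debian-based) host this is typically done with apt; open-iscsi is normally present on Proxmox out of the box:

apt-get install -y multipath-tools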

2. Edit /etc/iscsi/initiatorname.iscsi, setting the initiator name.
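For example (the suffix of the initiator name is illustrative; keep it unique per host):

# /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:pve-node1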

3. Edit /etc/iscsi/iscsid.conf setting the following parameters:

NOTE: node.startup = manual is the default value; it should be changed to node.startup = automatic.
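The parameter list itself is missing from this copy. At minimum, the change called out in the note above is required; shortening the replacement timeout is a common companion setting for multipath configurations (an assumption, verify against StarWind's current recommendations):

node.startup = automatic
node.session.timeo.replacement_timeout = 15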

4. Create the /etc/multipath.conf file using the following command:
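The command is missing from this copy; creating an empty file is sufficient, e.g.:

touch /etc/multipath.conf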

5. Edit /etc/multipath.conf adding the following content:
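The content block is missing from this copy. The sketch below reflects the device section StarWind commonly recommends for its LUNs; treat it as an assumption and verify against StarWind's current multipath guidance:

devices {
    device {
        vendor "STARWIND"
        product "STARWIND*"
        path_grouping_policy multibus
        path_checker "tur"
        failback immediate
        path_selector "round-robin 0"
        rr_min_io 3
        rr_weight uniform
        hardware_handler "1 alua"
    }
}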

6. Run iSCSI discovery on both nodes:
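For example, assuming the CVMs' iSCSI IP addresses are 172.16.10.10 and 172.16.10.20 (adjust to your addressing):

iscsiadm -m discovery -t st -p 172.16.10.10:3260
iscsiadm -m discovery -t st -p 172.16.10.20:3260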

7. Connect iSCSI LUNs:
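For example (the target IQNs are illustrative; use the names returned by the discovery step):

iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:sw1-targetha21 -p 172.16.10.10:3260 -l
iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:sw2-targetha22 -p 172.16.10.20:3260 -l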

8. Get WWID of StarWind HA device:
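For example, where /dev/sdX is one of the iSCSI disks that appeared after login:

/lib/udev/scsi_id -g -u -d /dev/sdX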

9. The WWID must be added to the file /etc/multipath/wwids. To do this, run the following command with the appropriate WWID:
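For example:

multipath -a <wwid-from-the-previous-step>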

10. Restart the multipath service.
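For example:

systemctl restart multipathd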

11. Check if multipathing is running correctly:
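For example:

multipath -ll
# each StarWind LUN should appear as one mpath device with two active paths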

12. Repeat steps 1-11 on every Proxmox host.

13. Create an LVM PV on the multipath device:
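For example, using the default mpatha alias:

pvcreate /dev/mapper/mpatha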

where mpatha is the alias for the StarWind LUN.
14. Create a VG on the LVM PV:
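For example (the VG name is illustrative):

vgcreate vg_starwind /dev/mapper/mpatha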

15. Log into Proxmox via the web GUI and go to Datacenter -> Storage. Add a new LVM storage based on the VG created on top of the StarWind HA device. Enable the Shared checkbox. Click Add.

16. Log in via SSH to all hosts and run the following command:
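The command itself is missing from this copy. Given the context, each host must pick up the metadata of the newly created shared VG, so an LVM rescan is the likely intent (an assumption):

pvscan
vgscan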

 
