This guide covers the deployment of StarWind vHCI: oVirt nodes and oVirt Engine with StarWind VSAN as shared storage. StarWind VSAN is deployed as a Controller Virtual Machine (CVM) that uses local storage to create a replicated storage pool. This pool is then presented as shared iSCSI storage for an oVirt Storage Domain.
The following prerequisites must be met:
1. 2 servers with local storage, which have direct network connections for Synchronization and iSCSI/StarWind heartbeat traffic.
2. Servers should have local storage available for the CentOS installation and the StarWind VSAN Controller Virtual Machine. The CVM uses local storage to create replicated shared storage connected to the oVirt nodes via iSCSI.
3. DNS records of the oVirt Nodes and oVirt Engine should be added to DNS Server. It is required to use static IP addresses for both oVirt node and oVirt Engine.
4. StarWind HA devices require at least 2 separate network links between the nodes. The first one is used for iSCSI traffic, the second one is used for Synchronization traffic.
Note. The network interfaces on each node for Synchronization and iSCSI/StarWind heartbeat traffic should be in different subnets and connected directly according to the network diagram below. Here, the 172.16.10.x subnet is used for the iSCSI/StarWind heartbeat traffic, while the 172.16.20.x subnet is used for the Synchronization traffic.
The diagram below illustrates the network and storage configuration of 2 node cluster:
Deploying oVirt node
1. Download oVirt (CentOS-based) node iso.
2. Boot from the downloaded ISO.
3. Choose “Installation Destination” and select the drive that will be used for OS installation. Choose the “Custom” storage configuration.
4. Click “Done” and partition the drive. Leave at least 70 GiB of available space; it will be used for the StarWind VSAN Controller VM deployment.
5. Install oVirt Node on your host.
Preconfiguring oVirt hosts
1. Download vHCI archive with StarWind CVM and configuration files. https://tmplink.starwind.com/StarWind_vHCI_KVM.zip
2. Login to oVirt node via web browser using the following address: https://%IP_of_the_node%:9090
3. Go to Networking page. Click Add Bridge.
4. Choose the Network interface and the Name of the bridge.
Note. Create bridges for all network interfaces that will be used for StarWind iSCSI and Synchronization traffic. In addition, create a separate bridge for management traffic.
5. Login to oVirt node via SSH
6. Create an XML file with the following content for each network that will be used for StarWind (substitute your bridge name):
<network>
  <name>%YourNetworkName%</name>
  <forward mode="bridge" />
  <bridge name="%YourNetworkName%" />
</network>
7. Define network in virsh and mark it to autostart.
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-define bridged-iscsi.xml
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-start bridged-iscsi
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-autostart bridged-iscsi
NOTE. iSCSI network is used as an example.
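Steps 6-7 can be scripted for all StarWind networks at once. The sketch below only generates the XML definitions; the network names iSCSI and Sync are examples and must match the bridges created in step 4:

```shell
#!/usr/bin/env bash
# Write a libvirt <network> definition for each StarWind bridge.
# "iSCSI" and "Sync" are example names; use your own bridge names.
for net in iSCSI Sync; do
  lower=$(printf '%s' "$net" | tr '[:upper:]' '[:lower:]')
  cat > "bridged-${lower}.xml" <<EOF
<network>
  <name>${net}</name>
  <forward mode="bridge"/>
  <bridge name="${net}"/>
</network>
EOF
done
```

Each generated file can then be passed to virsh net-define, net-start, and net-autostart as shown above.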
8. Enable multipathing using the following command:
mpathconf --enable
9. Create the multipathing configuration file for StarWind devices.
Add the following configuration to the file:
path_selector "round-robin 0"
hardware_handler "1 alua"
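The two directives above belong inside a devices section that matches StarWind LUNs. A complete file might look like the sketch below; the guide itself specifies only path_selector and hardware_handler, so the vendor/product matching and the remaining attributes are assumptions based on typical StarWind multipath settings:

```
devices {
    device {
        vendor "STARWIND"
        product "STARWIND*"
        path_grouping_policy multibus
        path_checker "tur"
        failback immediate
        path_selector "round-robin 0"
        rr_min_io 3
        rr_weight uniform
        hardware_handler "1 alua"
    }
}
```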
10. Restart multipathd service.
systemctl restart multipathd
11. Repeat steps 1-10 on every node.
Deploying StarWind VSAN Controller VM
1. Extract qcow2 file from the archive and upload it to the oVirt node via SFTP.
2. Login to oVirt node via SSH.
3. Create VM using the following command.
virt-install --connect qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf --name cvm01 --memory 8192 --vcpus 8 --disk /mnt/nvme01/StarWindCVM.qcow2 --import --os-variant centos7.0 --wait 0
4. Attach network interfaces for each network with the following command:
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf attach-interface --domain cvm01 --type bridge --source iSCSI --target iscsi --model virtio --config --live
Note. Interfaces for iSCSI, Synchronization and Management traffic must be attached.
5. Check VM display using the following command.
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf domdisplay cvm01
NOTE. Depending on the output you should connect to the VM display via Spice or VNC console.
6. Create Virtual Disk for StarWind VSAN Highly Available Storage.
qemu-img create -f raw /%path-to-storage%/starwind-cvm-disk1 %filesize%
7. Attach disk to a VM.
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf attach-disk cvm01 /%path-to-storage%/starwind-cvm-disk1 vdb --cache none --config
8. Create a tunnel to the host using an SSH client:
ssh -L 5906:127.0.0.1:5906 root@%hostipaddress%
9. Configure VM to automatically start on boot.
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf autostart cvm01
10. Connect to VM using Virt-Viewer and login using the following credentials.
• Username: user
• Password: rds123RDS
Note. Virt-Manager can be downloaded here:
11. Open the file corresponding to the Management interface using a text editor, for example: sudo nano /etc/sysconfig/network-scripts/ifcfg-eth0
12. Edit the file:
Change the line BOOTPROTO=dhcp to: BOOTPROTO=static
Add the IP settings needed to the file:
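For example, the edited file might look like the fragment below; the addresses are placeholders for your management subnet:

```
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.12.10
PREFIX=24
GATEWAY=192.168.12.1
DNS1=192.168.12.1
```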
13. Now, open the web browser and enter the IP address of the VM. Log into the VM using the following default credentials:
• Username: user
• Password: rds123RDS
NOTE: Make sure to check the “Reuse my password for privileged tasks” box.
14. After a successful login, click Accounts on the left sidebar.
15. Select a user and click Set Password.
16. On the left sidebar, click Networking.
Here, the Management IP address of the StarWind Virtual SAN Virtual Machine can be configured, as well as IP addresses for iSCSI and Synchronization networks. In case the Network interface is inactive, click on the interface, turn it on, and set it to Connect automatically.
17. Configure IP addresses for each interface.
18. The result should look like in the picture below:
NOTE: It is recommended to set MTU to 9000 on interfaces dedicated for iSCSI and Synchronization traffic. Change Automatic to 9000, if required.
19. Access the Storage section. Locate the recently added disk in the Drives section and choose it.
20. The added disk has no partitions or filesystem. Press the Create Partition Table button to create a partition table.
21. Press Create Partition to format the disk and set the mount point. The mount point should be as follows: /mnt/%yourdiskname%
22. On the Storage section, under Content, navigate to the Filesystem tab. Click Mount.
23. Repeat steps 2-22 on every node.
Configuring StarWind Management Console
1. Install StarWind Management Console on each server or on a separate workstation with Windows OS (Windows 7 or higher, Windows Server 2008 R2 and higher) using the installer available here.
NOTE: StarWind Management Console and PowerShell Management Library components are required.
2. Select the appropriate option to apply the StarWind License key.
Once the appropriate license key has been received, it should be applied to StarWind Virtual SAN service via Management Console or PowerShell.
3. Open StarWind Management Console and click Add Server.
4. Type the IP address of the StarWind Virtual SAN in the pop-up window and click OK.
5. Select the server and click Connect.
6. Click Apply Key… on the pop-up window.
7. Select Load license from file and click the Load button.
8. Select the appropriate license key.
As an alternative, PowerShell can be used. Open StarWind InstallLicense.ps1 script with PowerShell ISE as administrator. It can be found here:
C:\Program Files\StarWind Software\StarWind\StarWindX\Samples\powershell\InstallLicense.ps1
Type the IP address of StarWind Virtual SAN VM and credentials of StarWind Virtual SAN service (defaults login: root, password: starwind).
Add the path to the license key.
9. After the license key is applied, StarWind devices can be created.
NOTE: In order to manage StarWind Virtual SAN service (e.g. create ImageFile devices, VTL devices, etc.), StarWind Management Console can be used.
Creating StarWind devices
Note. oVirt hosted engine requires shared storage to be deployed. Create StarWind HA device for hosted engine at this stage. HA devices for VMs should be created after hosted engine deployment.
1. In the StarWind Management Console, click the Add Device (advanced) button to open the Add Device (advanced) Wizard.
2. Select Hard Disk Device as the type of device to be created.
3. Select Virtual Disk.
4. Specify a virtual disk Name, Location, and Size.
5. Select the Thick provisioned disk type and block size.
NOTE: Use a 4096-byte sector size for targets connected to Windows-based systems, and a 512-byte sector size for targets connected to Linux-based systems (ESXi/Xen/KVM).
6. Define a caching policy and specify a cache size (in MB). Also, the maximum available cache size can be specified by selecting the appropriate checkbox. Optionally, define the L2 caching policy and cache size.
7. Specify Target Parameters. Select the Target Name checkbox to enter a custom target name. Otherwise, the name is generated automatically in accordance with the specified target alias.
8. Click Create to add a new device and attach it to the target.
9. Click Close to finish the device creation.
10. The successfully added devices appear in the StarWind Management Console.
Select the Required Replication Mode
The replication can be configured using Synchronous “Two-Way” Replication mode:
Synchronous or active-active replication ensures real-time synchronization and load balancing of data between two or three cluster nodes. Such a configuration tolerates the failure of two out of three storage nodes and enables the creation of an effective business continuity plan. With synchronous mirroring, each write operation requires control confirmation from both storage nodes. It guarantees the reliability of data transfers but is demanding in bandwidth since mirroring will not work on high-latency networks.
Synchronous “Two-Way” replication
1. Right-click the recently created device and select Replication Manager from the shortcut menu.
2. Select the Add Replica button in the top menu.
3. Select Synchronous “Two-Way” replication as a replication mode.
4. Specify a partner Host name or IP address and Port Number.
Selecting the Failover Strategy
StarWind provides 2 options for configuring a failover strategy:
The Heartbeat failover strategy helps avoid the “split-brain” scenario, in which the HA cluster nodes are unable to synchronize but continue to accept write commands from the initiators independently. It can occur when all synchronization and heartbeat channels disconnect simultaneously and the partner nodes do not respond to the node’s requests. As a result, the StarWind service assumes the partner nodes are offline and continues operating in single-node mode using the data written to it.
If at least one heartbeat link is online, the StarWind services can communicate with each other via this link. The device with the lowest priority is marked as not synchronized and blocked for further read and write operations until the synchronization channel is restored. Meanwhile, the partner device on the synchronized node flushes data from the cache to the disk to preserve data integrity in case the node goes down unexpectedly. It is recommended to assign several independent heartbeat channels during replica creation to improve system stability and avoid the “split-brain” issue.
With the heartbeat failover strategy, the storage cluster will continue working with only one StarWind node available.
The Node Majority failover strategy ensures the synchronization connection without any additional heartbeat links. The failure-handling process occurs when the node has detected the absence of the connection with the partner.
The main requirement for keeping the node operational is an active connection with more than half of the HA device’s nodes. Calculation of the available partners is based on their “votes”.
In case of a two-node HA storage, all nodes will be disconnected if there is a problem on the node itself, or in communication between them. Therefore, the Node Majority failover strategy requires the addition of the third Witness node or file share (SMB) which participates in the nodes count for the majority, but neither contains data on it nor is involved in processing clients’ requests. In case an HA device is replicated between 3 nodes, no Witness node is required.
With Node Majority failover strategy, failure of only one node can be tolerated. If two nodes fail, the third node will also become unavailable to clients’ requests.
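The vote arithmetic behind Node Majority can be illustrated with a small shell check; the voter counts below are example values for a 2-node setup with a witness:

```shell
#!/usr/bin/env bash
# Node Majority: a node keeps serving I/O only while it can reach
# more than half of all voters (data nodes plus witness).
voters=3   # 2 data nodes + 1 witness node (example)
alive=2    # voters this node can currently reach, including itself
if [ $(( alive * 2 )) -gt "$voters" ]; then
  echo "node stays active"
else
  echo "node blocks I/O"
fi
```

With 2 of 3 voters reachable the node stays active; if a second voter is lost, the check fails and the node stops serving clients' requests.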
Please select the required option:
NOTE: There are several options – a RAM-based journal (default) and a disk-based journal with the Failure or Continuous strategy – that help avoid full synchronization in many cases.
RAM-based (default) synchronization journal is placed in RAM. Synchronization with RAM journal provides good I/O performance in any scenario. Full synchronization could occur in the cases described in this KB: https://knowledgebase.starwindsoftware.com/explanation/reasons-why-full-synchronization-may-start/
A disk-based journal is placed on a separate disk from the StarWind devices. It avoids full synchronization for the devices where it is configured, even when the StarWind service is stopped on all nodes.
The disk-based synchronization journal should be placed on a separate, preferably faster, disk than the StarWind devices. SSDs and NVMe disks are recommended, since synchronization performance is defined by the speed of the disk holding the journal. For example, it can be placed on the OS boot volume.
With a disk-based journal and 2-way replication, allocate 2 MB of disk space for the synchronization journal per 1 TB of HA device size; for 3-way replication, allocate 4 MB per 1 TB.
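The sizing rule works out as follows; the device size and replication level below are example values:

```shell
#!/usr/bin/env bash
# Journal space per the rule above: 2 MB per TB of HA device size
# for 2-way replication, 4 MB per TB for 3-way replication.
device_tb=4    # HA device size in TB (example)
replicas=2     # 2 for 2-way, 3 for 3-way replication
if [ "$replicas" -eq 3 ]; then per_tb=4; else per_tb=2; fi
echo "journal size: $(( device_tb * per_tb )) MB"
```

So a 4 TB HA device with 2-way replication needs an 8 MB journal, and 16 MB with 3-way replication.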
Failure journal – provides good I/O performance, like a RAM-based journal, while all device nodes are in a healthy, synchronized state. If the device on one node goes into a not-synchronized state, the disk-based journal activates, and a performance drop may occur, since device performance is then defined by the speed of the disk holding the journal. Fast synchronization is not guaranteed in all cases; for example, a simultaneous hard reset of all nodes still causes full synchronization.
Continuous journal – guarantees fast synchronization and data consistency in all cases, but has the worst I/O performance because of frequent write operations to the on-disk journal.
5. Click Change Network Settings.
6. Specify the interfaces for Synchronization and Heartbeat Channels. Click OK and then click Next.
7. In Select Partner Device Initialization Mode, select Synchronize from existing Device and click Next.
8. Click Create Replica. Click Finish to close the wizard.
The successfully added device appears in StarWind Management Console.
9. Follow the same procedure for the creation of other virtual disks that will be used as storage repositories.
There are two ways to configure Witness for 2-nodes StarWind HA device, created with Node Majority Failover Strategy: File Share (SMB) as Witness and additional server as Witness Node.
– Creating HA device with File Share (SMB) as Witness:
SMB Witness is a file, located on SMB share, which can be accessed by both nodes and help them to eliminate the split-brain issue in case of synchronization connection interruption between the nodes. To set up the SMB file share as a Witness for 2-nodes HA device with Node Majority Failover Strategy, perform the actions, described on this page:
– Creating HA device with Witness Node:
1. Select the Node Majority failover strategy and click Next.
2. Choose Create new Partner Device and click Next.
3. Specify the partner device Location and modify the target name if necessary. Click Next. Select Synchronization Journal strategy and location and click Next.
4. In Network Options for Replication, press the Change network settings button and select the synchronization channel for the HA device.
5. In Specify Interfaces for Synchronization Channels, select the checkboxes with the appropriate networks and click OK. Then click Next.
6. Select Synchronize from existing Device as the partner device initialization mode.
7. Press the Create Replica button and close the wizard.
8. The added devices will appear in StarWind Management Console.
Repeat the steps above to create other virtual disks if necessary.
Adding Witness Node
Witness node can be configured on a separate host or as a virtual machine in a cloud. It requires StarWind Virtual SAN service installed on it.
NOTE: Since the device created in this guide is replicated between 2 active nodes with the Node Majority failover strategy, a Witness node must be added to it.
1. Open StarWind Management Console, right-click on the Servers field and press the Add Server button. Add a new StarWind Server which will be used as the Witness node and click OK.
2. Right-click on the HA device with the configured Node Majority failover policy and select Replication Manager and press the Add Replica button.
3. Select Witness Node.
4. Specify the Witness node Host Name or IP address. The default Port Number is 3261.
5. In Partner Device Setup, specify the Witness device Location. Optionally, modify the target name by clicking the appropriate button.
6. In Network Options for Replication, select the synchronization channel with the Witness node by clicking the Change Network Settings button.
7. Specify the interface for Synchronization and Heartbeat and click OK.
8. Click Create Replica and then close the wizard.
9. Repeat the steps above to create other virtual disks if necessary.
NOTE: To extend an Image File or a StarWind HA device to the required size, please check the article below:
Deploying self-hosted engine
1. Login to oVirt Node, which will be used for self-hosted engine deployment, via SSH.
Note. The hosted engine will be deployed as a VM on the shared storage of the node chosen for the deployment. The second node should be added separately to oVirt Engine.
2. Use sftp to upload the following files to oVirt Node.
Upload the fix_ansible.yml, fix_check_logging.yml, and apply_starwind_brand.yml Ansible hooks via an SFTP client to the /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/hooks/enginevm_before_engine_setup/ directory.
Upload check_logging_collectors.yml to the /opt/ directory.
Upload the 01-starwind.brand directory and its contents to the /opt/ directory.
3. Remove management network from virsh.
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-destroy bridged-mgmt
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-undefine bridged-mgmt
4. Login to oVirt node via web. Go to Networking page and choose management bridge. Click Delete.
Note. Management Network in virsh and Management Bridge must be removed before hosted engine deployment. Repeat steps 3-4 on all nodes.
5. Run the following command to install oVirt Engine.
hosted-engine --deploy --4
6. Follow the installation wizard and deploy Self-Hosted engine on iSCSI storage, which was created.
Note. DNS records of the oVirt Node and oVirt Engine should be added before oVirt engine deployment. It is required to use static IP addresses for both oVirt node and oVirt Engine.
It is recommended to choose default settings during the deployment. IP address and iSCSI drive should be chosen.
7. After successful deployment, login to oVirt Engine using the following credentials:
• Username: admin
• Password: configured during oVirt Engine deployment.
8. After deployment connect to all paths of iSCSI LUN on every node in the cluster.
iscsiadm -m discovery -t st -p 172.16.10.10
iscsiadm -m node -l -T iqn.2008-08.com.starwindsoftware:cvm01-lun01
iscsiadm -m discovery -t st -p 172.16.10.20
iscsiadm -m node -l -T iqn.2008-08.com.starwindsoftware:cvm02-lun01
Adding hosts to oVirt Engine
1. Login to oVirt Engine and go to Compute -> Hosts
2. Type Name, Hostname or IP, Password of the host, which will be added. Click OK.
3. Repeat steps 1-2 for all hosts, which were not added to oVirt engine.
Defining Management Network for StarWind CVM
1. Login to oVirt node via SSH
2. Attach Network interface to StarWind VM.
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf attach-interface --domain cvm01 --type bridge --source vdsm-ovirtmgmt --target ovirtmgmt --model virtio --config --live
3. Repeat steps 1-2 on all nodes.
Provisioning StarWind HA Storage to oVirt Hosts
1. Login to oVirt Engine and open Storage -> Domains. Click New Domain.
2. Choose Storage Type – iSCSI, Host and Name of Storage Domain. Discover targets via iSCSI links, which were previously configured. Click Login All.
3. Add LUN from each iSCSI target. Click OK.
4. The Storage Domain will be added to the list of domains and can be used as storage for VMs.
5. Login to each host and verify that the multipathing policy has been applied using the following command:
multipath -ll