StarWind Hybrid Cloud for Azure is a hybrid cloud solution that allows you to extend on-premises virtualization workloads from your datacenter to the Azure public cloud. With its implementation, you can assemble on-premises servers and Azure VMs into a familiar Hyper-V Failover Cluster. The Hybrid Cloud is orchestrated using StarWind Management Console, Hyper-V, and SCVMM, requiring no experience with the Azure platform.
StarWind Virtual SAN delivers highly available shared storage by replicating the data between locations. With its active-active storage, StarWind provides a fault-tolerant Disaster Recovery site in the public cloud to meet the required RTO and RPO.
A full set of up-to-date technical documentation can always be found here, or by pressing the Help button in the StarWind Management Console.
The picture below illustrates the interconnection diagram of the StarWind Hybrid Cloud feature described in the guide.
Make sure you have met the prerequisites for deploying StarWind Hybrid Cloud:
- An Azure Subscription (you can get a free trial here).
- A selected Azure location that allows nested virtualization (see here and here). It's beneficial to choose the location with minimal network latency, which can be checked here.
- Verify that you have an externally facing public IPv4 address for your VPN device.
- At least a 100 Mbps network connection to the Internet. A 1 Gbps link is highly recommended.
- An Active Directory structure and DNS deployed on-premises.
- Windows Server 2016 installed on the server that is going to be clustered.
The following values are used as examples in this document. You can use these values to create the environment, or refer to them to better understand the examples in this guide.
- Virtual Network Name: AzureVNet1
- Azure Address Space: 10.10.0.0/24 (the address space must contain all subnets, including the GatewaySubnet)
- Azure Subnet: 10.10.0.0/26
- Azure GatewaySubnet: 10.10.0.128/27
- Resource Group: AzureRG
- Location: East US
- DNS Server: Optional. The IP address of your DNS server.
- Virtual Network Gateway Name: AzureVNet1GW
- Public IP: AzureVNet1GW-IP
- VPN Type: Route-based
- Connection Type: Site-to-site (IPsec)
- Gateway Type: VPN
- Local Network Gateway Name: On-Premise
- Connection Name: AzureVNet1toOnPremise
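For reference, the example environment above can also be provisioned from Azure PowerShell with the AzureRM module. This is only a sketch: the names and prefixes are the example values from this guide, a signed-in session (Login-AzureRmAccount) is assumed, and a /24 address space is assumed so that the GatewaySubnet fits inside it.

```powershell
# Sketch: create the example resource group and virtual network.
# Assumes the AzureRM module and an authenticated session (Login-AzureRmAccount).
New-AzureRmResourceGroup -Name 'AzureRG' -Location 'East US'

# The GatewaySubnet must fall inside the VNet address space.
$subnet   = New-AzureRmVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '10.10.0.0/26'
$gwSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.10.0.128/27'

New-AzureRmVirtualNetwork -Name 'AzureVNet1' -ResourceGroupName 'AzureRG' `
    -Location 'East US' -AddressPrefix '10.10.0.0/24' -Subnet $subnet, $gwSubnet
```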
Creating a Site-to-Site connection in the Azure portal
From a browser, navigate to the Azure portal and sign in with your Azure account.
Creating a Resource Group
To begin with, create a Resource group in the Azure portal. It's a container that holds related resources for an Azure solution, or resources that ought to be managed as a group. Make sure you have selected the location with minimal network latency.
In the portal, click New. In the Search the marketplace field, type 'Resource group'. Locate Resource group in the returned list and click it to open the Resource group blade. Near the bottom of the Resource group blade, click Create.
Creating a virtual network
Navigate to the Resource group and select it. In the top menu, select Add. In the Search the marketplace field, type 'Virtual Network'. Locate Virtual Network in the returned list and click it to open the Virtual Network blade. Use Resource Manager as the deployment model.
Creating gateway subnet
In your Resource group, click on the virtual network. Open Subnets in the sidebar menu. Create a Gateway subnet.
Creating VPN gateway
Click + on the left side of the portal page and type ‘Virtual Network Gateway’ in the search field. In Results, locate and click Virtual network gateway. At the bottom of the ‘Virtual network gateway’ blade, click Create. This opens the Create virtual network gateway blade.
On the Create virtual network gateway blade, specify the values for your virtual network gateway.
- Name: Name your gateway.
- Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
- VPN type: Select the Route-based VPN type.
- SKU: Select the gateway SKU from the dropdown menu. For more information about gateway SKUs, see Gateway SKUs.
- Resource Group: Click 'Use existing' and then select your group.
- Location: Select the location where your virtual network is located.
- Virtual network: Choose the virtual network to which you want to add this gateway.
- Public IP address: The ‘Create public IP address’ blade creates a public IP address object.
- First, click Public IP address to open the ‘Choose public IP address’ blade, then click +Create New to open the ‘Create public IP address’ blade.
- Next, input a Name for your public IP address, then click OK at the bottom of this blade to save your changes.
Verify the settings and click Create to begin creating the VPN gateway. The settings will be validated, and you'll see the "Deploying Virtual network gateway" title on the dashboard. Creating a gateway can take up to 45 minutes.
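For reference, the same gateway can be sketched in Azure PowerShell. The IP configuration name and the VpnGw1 SKU are assumptions here; adjust them to your environment.

```powershell
# Sketch: create the public IP and the route-based VPN gateway.
$gwip  = New-AzureRmPublicIpAddress -Name 'AzureVNet1GW-IP' -ResourceGroupName 'AzureRG' `
    -Location 'East US' -AllocationMethod Dynamic
$vnet  = Get-AzureRmVirtualNetwork -Name 'AzureVNet1' -ResourceGroupName 'AzureRG'
$gwsub = Get-AzureRmVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
$ipcfg = New-AzureRmVirtualNetworkGatewayIpConfig -Name 'gwipconfig' `
    -SubnetId $gwsub.Id -PublicIpAddressId $gwip.Id

# Gateway creation can take up to 45 minutes.
New-AzureRmVirtualNetworkGateway -Name 'AzureVNet1GW' -ResourceGroupName 'AzureRG' `
    -Location 'East US' -IpConfigurations $ipcfg `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
```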
Creating the local network gateway
The local network gateway refers to an on-premises location. Provide the site with a name by which Azure can refer to it, and specify the IP address of the on-premises VPN device to which you will create a connection. Specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you have specified are the prefixes located on your on-premises network.
Navigate to All resources, click +Add.
In the search box, locate and select Local network gateway. Then click Create to open the Create local network gateway blade.
On the Create local network gateway blade, specify the values for your local network gateway
- Name: Specify the name for your local network gateway object.
- IP address: This is the public IP address of the VPN device that you want Azure to connect to.
- Address Space refers to the address ranges for the network that this local network represents.
- Subscription: Verify that the correct subscription is shown.
- Resource Group: Pick the resource group that you want to use.
- Location: Specify the location where this object will be created.
When you have finished specifying the values, click Create at the bottom of the blade to create the local network gateway.
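As a PowerShell sketch, the same local network gateway could be created as follows. The public IP address and the on-premises prefix are placeholders for your own values.

```powershell
# Sketch: register the on-premises site in Azure.
# 203.0.113.10 and 192.168.0.0/24 are placeholder values; use your
# VPN device's public IP and your on-premises address range.
New-AzureRmLocalNetworkGateway -Name 'On-Premise' -ResourceGroupName 'AzureRG' `
    -Location 'East US' -GatewayIpAddress '203.0.113.10' -AddressPrefix '192.168.0.0/24'
```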
Creating the VPN connection
To create the Site-to-Site VPN connection between your virtual network gateway and the on-premises VPN device, open the virtual network gateway blade. On the blade for AzureVNet1GW, click Connections. At the top of the Connections blade, click +Add to open the Add connection blade.
On the Add connection blade, fill in the values to create your connection
- Name: Name your connection. In this guide, the AzureVNet1toOnPremise name is used.
- Connection type: Select Site-to-site (IPSec).
- Virtual network gateway: The value is fixed because you are connecting from this gateway.
- Local network gateway: Click Choose a local network gateway and select the local network gateway that you want to use. In this guide, On-Premise is used.
- Shared Key: The value here must match the value that you are using for your local on-premises VPN device. In this guide, 'StrongPasswordKey' is used, but you can (and should) use something more complex.
Click OK to create your connection. You’ll see Creating Connection flash on the screen. You can view the connection process in the Connections blade of the virtual network gateway. The Status will go from Unknown to Connecting, and then to Succeeded.
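The connection itself can be sketched in PowerShell with the gateway objects created earlier. The shared key must match the one you will configure on the RRAS server.

```powershell
# Sketch: create the Site-to-Site IPsec connection between the two gateways.
$gw  = Get-AzureRmVirtualNetworkGateway -Name 'AzureVNet1GW' -ResourceGroupName 'AzureRG'
$lng = Get-AzureRmLocalNetworkGateway -Name 'On-Premise' -ResourceGroupName 'AzureRG'

New-AzureRmVirtualNetworkGatewayConnection -Name 'AzureVNet1toOnPremise' `
    -ResourceGroupName 'AzureRG' -Location 'East US' `
    -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng `
    -ConnectionType IPsec -SharedKey 'StrongPasswordKey'
```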
RRAS server setup
Installing the RRAS server
To set up the VPN tunnel between on-premises and Azure, the Routing and Remote Access Service (RRAS) role is configured below on Windows Server 2016, which in this guide also hosts the AD DC and DNS roles. If you are going to use an external VPN device, check its compatibility with Azure here.
For the RRAS installation, Windows Server 2016 is used. Open Server Manager. Select Manage -> Add Roles and Features. In the Add Roles and Features Wizard, perform the following actions:
- Before You Begin: Click Next
- Installation Type: Role-based -> Click Next
- Server Selection: Select a server from the server pool -> RRAS-Server -> Click Next
- Server Roles: Check Remote Access -> Click Next
- Features: Click Next
- Remote Access: Click Next
- Role Services:
- Direct Access and VPN (RAS)
- Click Add Features on the pop-up window
- Click Next
- Web Server Role (IIS): Click Next
- Role Services
- Accept Defaults: Click Next
- Confirmation: Click Install
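The wizard steps above are equivalent to a single PowerShell line, shown here as a sketch to run on the RRAS server:

```powershell
# Sketch: install the Remote Access role with the VPN and Routing services.
Install-WindowsFeature -Name RemoteAccess, DirectAccess-VPN, Routing -IncludeManagementTools
```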
Configuring the RRAS server
Open Routing and Remote Access. Select “Secure connection between two private networks”. Complete the configuration with the default settings.
The Demand-Dial Interface Wizard pops up. Specify a name for the interface. In this guide, "On-Premise-To-Azure" is used.
On the Static Routes page, add the route to the Azure VMs subnet (10.10.0.0/26 in this example).
Now, in Routing and Remote Access > Network Interfaces, right-click “On-Premise-To-Azure” and choose Properties. Click on the Options tab and set the connection type to persistent. On the Security tab, set the pre-shared key you have setup on Azure earlier.
Now, you can right-click the On-Premise-To-Azure connection and select Connect. It will be displayed as connected in RRAS, as shown in the following screenshot.
Up in Azure, you should also see the connection status as Connected.
Double-check the Static Routes. The configuration should look the same as on the following screenshot.
The VPN connection is configured, and hosts on both sides are able to communicate. Now, proceed with the Failover Cluster setup.
Opening Ports in the Firewall
1. Open Windows Firewall with Advanced Security.
2. Click Inbound Rules and open the New Rule… wizard.
3. Allow connections for UDP ports 500, 4500, and 1701, as well as IP protocol 50 (ESP), to let the VPN traffic pass through.
4. Repeat the procedure for Outbound Rules.
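The firewall openings above can be scripted with the built-in NetSecurity cmdlets, as a sketch; the display names are arbitrary examples, and the same lines with -Direction Outbound cover the outbound rules.

```powershell
# Sketch: allow IKE, NAT-T, and L2TP (UDP) plus ESP (IP protocol 50) inbound.
# Display names are arbitrary examples; repeat with -Direction Outbound.
New-NetFirewallRule -DisplayName 'VPN IKE'   -Direction Inbound -Protocol UDP -LocalPort 500  -Action Allow
New-NetFirewallRule -DisplayName 'VPN NAT-T' -Direction Inbound -Protocol UDP -LocalPort 4500 -Action Allow
New-NetFirewallRule -DisplayName 'VPN L2TP'  -Direction Inbound -Protocol UDP -LocalPort 1701 -Action Allow
New-NetFirewallRule -DisplayName 'VPN ESP'   -Direction Inbound -Protocol 50  -Action Allow
```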
Preconfiguring the On-Premise server
This document assumes that you have a domain controller and you have added the On-Premise server to the domain. It also assumes that you have installed the Failover Clustering and Multipath I/O features, as well as the Hyper-V role on the local server. These actions can be performed using Server Manager (the Add Roles and Features menu item).
Download and install StarWind Virtual SAN on the local server. The latest StarWind build can be found by following this link.
Enabling Multipath Support
1. Open the MPIO manager: Start->Administrative Tools->MPIO.
2. Go to the Discover Multi-Paths tab.
3. Tick the Add support for iSCSI devices checkbox.
4. Click Add. When prompted to restart the server, click Yes to proceed.
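The same MPIO change can be applied from PowerShell, as a sketch; the Multipath I/O feature must already be installed.

```powershell
# Sketch: add MPIO support for iSCSI devices, then reboot to apply.
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer
```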
Deploying StarWind VM in Azure
From a browser, navigate to the Azure portal and search for StarWind Virtual SAN VM. Select 'Bring your own license' (BYOL) if you already have a StarWind license. To obtain a license, please contact Sales.
Otherwise, pick StarWind Virtual SAN Hourly. Like BYOL, it is a preinstalled Azure VM with the Hyper-V role, MPIO, and Failover Clustering features, but StarWind is shipped on a per-hour license basis.
Create a StarWind VM using Resource Manager.
Enter a Name for the virtual machine.
Enter a User name and a strong Password that are used to create a local account on the VM. Select an existing Resource group or type the name for a new one. In the example, AzureRG is the name of the resource group.
Select an Azure datacenter Location where you want the VM to run. In the example, East US is the location.
The Size blade identifies the configuration details of the VM, and lists various choices that include OS, number of processors, disk storage type, and estimated monthly usage costs. For the StarWind Hybrid Cloud implementation, only Dv3 and Ev3 VM sizes are supported.
Choose a VM size, and then click Select to continue. In this example, D2S_V3 Standard is the VM size.
The Settings blade requests storage and network options. Click Network security group. Add inbound and outbound rules for UDP ports 500, 4500, and 1701, as well as IP protocol 50 (ESP), to allow VPN traffic to pass through. Set the High availability option to None. Optionally, you can make changes like Enable auto-shutdown and Monitoring. Click OK once finished.
The Summary blade lists the settings specified in the previous blades. Click OK when you’re ready to create the image. The VM deployment takes about 15 minutes.
Adding a network adapter
While it's possible to create NICs in the Azure management portal, you cannot attach them to VMs there. That can only be done using PowerShell. Make sure you have the latest version of Microsoft Azure PowerShell installed on your PC. The latest release can be installed using the Web Platform Installer.
The first step is to open Windows PowerShell ISE and log in to your Azure subscription using the Login-AzureRmAccount cmdlet. Define the following variables for the VM, network, and other parameters that will be required.
$vmName = 'sw-sed-azure-01'
$vnetName = 'AzureVNet1'
$RG = 'AzureRG'
$subnetName = 'default'
$nicName = 'sw-sed-azure-01011'
$location = 'East US'
$ipAddress = '10.10.0.10'
Create an object for the VM using Get-AzureRmVM:
$VM = Get-AzureRmVM -Name $vmName -ResourceGroupName $RG
Now, create an object for the virtual network using Get-AzureRmVirtualNetwork:
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $RG
The new NIC needs to be connected to a subnet ($subnetName), but first, the subnet ID must be retrieved before working with it in PowerShell:
$subnetID = (Get-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $vnet).Id
Finally, a new NIC can be created using the New-AzureRmNetworkInterface cmdlet:
New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $RG -Location $location -SubnetId $subnetID -PrivateIpAddress $ipAddress
A new NIC has been created and can now be attached to the VM. First, create an object for the NIC using the Get-AzureRmNetworkInterface cmdlet:
$NIC = Get-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $RG
Add the NIC to the VM configuration:
$VM = Add-AzureRmVMNetworkInterface -VM $VM -Id $NIC.Id
To check that the configuration has been changed to include the new NIC, run the command below to see the list of NICs attached to the VM:
$VM.NetworkProfile.NetworkInterfaces
Assign the first NIC as the primary:
$VM.NetworkProfile.NetworkInterfaces.Item(0).Primary = $true
Lastly, commit the new configuration using the Update-AzureRmVM cmdlet:
Update-AzureRmVM -VM $VM -ResourceGroupName $RG
Enabling IP forwarding
1. Navigate to VM -> Networking.
2. Select the first Network interface.
3. Open the IP configurations page from the sidebar menu.
4. Enable the IP forwarding setting and Save the configuration.
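IP forwarding can also be enabled from Azure PowerShell instead of the portal, as a sketch reusing the variables defined earlier:

```powershell
# Sketch: enable IP forwarding on the NIC from PowerShell.
# Uses $nicName and $RG defined earlier in this guide.
$nic = Get-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $RG
$nic.EnableIPForwarding = $true
Set-AzureRmNetworkInterface -NetworkInterface $nic
```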
Azure requires a Storage account to store VM disks.
1. On the Hub menu, select New -> Storage -> Storage account.
2. Enter a name for your storage account.
3. Specify the deployment model to be used: Resource Manager or Classic. Resource Manager is the recommended deployment model.
4. Select the type of storage account: General purpose. Then specify the performance tier: Standard or Premium with regards to On-Premises storage configuration. The default is Standard. For more details on standard and premium storage accounts, see Introduction to Microsoft Azure Storage and Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads.
5. Select the replication option, subscription, and region.
6. Click Create to create the storage account.
7. Navigate to StarWind VM -> Disks.
8. Click '+Add data disk'.
9. Specify the disk Account type and Size according to the on-premises storage configuration.
10. Select a Storage container.
11. Click OK to finish the setup.
Installing Roles and Features
1. Connect to VM using Remote Desktop Connection.
2. Launch Server Manager and click Add Roles and Features.
3. Select the Hyper-V role for installation.
4. Select the Failover Clustering and Multipath I/O features for installation.
5. Complete the installation on the Azure VM.
Join the Azure VM to the on-premises domain
1. Press `Windows key + X` on the Azure VM to invoke the context menu, and select `System`.
2. In the opened window, click Change settings to open System Properties.
3. Click the Change… button. Select the Domain option and type the local domain name.
Click the OK button. Confirm the domain join by entering the domain administrator's credentials.
4. Accept the system reboot to complete the join.
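The domain join can also be done from an elevated PowerShell prompt, as a sketch; the domain name below is a placeholder for your local domain.

```powershell
# Sketch: join the Azure VM to the on-premises domain and reboot.
# 'starwind.local' is a placeholder; use your actual domain name.
Add-Computer -DomainName 'starwind.local' -Credential (Get-Credential) -Restart
```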
Configuring Shared Storage
1. Launch the StarWind Management Console: double-click the StarWind tray icon.
2. StarWind Management Console will ask you to specify the default storage pool on the server you're connecting to for the first time. Please configure the storage pool to use one of the volumes you've prepared earlier. All the devices created through the Add Device wizard will be stored on it. Should you decide to use an alternative storage path for your StarWind virtual disks, please use the Add Device (advanced) menu item.
Press the Yes button to configure the storage pool. If you need to change the storage pool destination, press Choose path… and point the browser to the necessary disk.
NOTE: Each of the arrays, which will be used by StarWind Virtual SAN to store virtual disk images, has to meet the following requirements:
- Initialized as GPT
- Have a single NTFS-formatted partition
- Have a drive letter assigned
3. Press the Add Device (advanced) button on the toolbar.
4. Add Device Wizard will appear. Select Hard disk device and click Next.
5. Select Virtual disk and click Next.
6. Specify the virtual disk location, name, and size. Click Next.
7. Specify the virtual disk options. Click Next.
8. Define the caching policy and specify the cache size (in MB). Click Next.
9. Define the Flash Cache Parameters policy and size if necessary. Click Next to continue.
NOTE: It is strongly recommended to use a SSD-based storage for “Flash Cache” caching.
10. Specify target parameters.
Select the Target Name checkbox to enter a custom name of a target. Otherwise, the name will be generated automatically based on the target alias. Click Next to continue.
11. Click Create to add a new device and attach it to the target. Then, click Close to close the wizard.
12. Right-click on the servers field and select Add Server. Add a new StarWind Server, which is an Azure VM. Click OK to continue.
13. Right-click on the device you have just created, and select Replication Manager. Replication Manager Window will appear. Press the Add Replica button.
14. Select Synchronous two-way replication. Click Next to proceed
15. Specify the partner server host name or IP address. The default StarWind management port is 3261. If you have configured a different port, please type it in the Port number field. Click Next.
16. Select Heartbeat Failover Strategy. Click Next to proceed.
17. Choose Create new Partner Device. Click Next.
18. Specify the partner device location if necessary. You can also modify the target name of the device. Click Next.
19. On this screen, you can select the synchronization and heartbeat channels for the HA device. You can also modify the ALUA settings. Click Change network settings.
20. Click Allow Free Select Interfaces to be able to specify NICs from different subnetworks. Specify the interfaces for synchronization and Heartbeat. Click OK. Then click Next.
21. Select the partner device initialization mode and set Do not Synchronize. Click Next.
NOTE: Use this type of synchronization only when adding a partner to a device that doesn't contain any data.
22. Press the Create Replica button. Then click Close to close the wizard.
23. The added device will appear in the StarWind Management Console.
24. Repeat the steps 3 – 23 for the remaining virtual disk that will be used for File Share.
Discovering Target Portals
In this chapter, the previously created disks will be connected to the servers that are to be added
to the cluster:
1. Launch Microsoft iSCSI Initiator: Start > Administrative Tools > iSCSI Initiator, or run iscsicpl from the command-line interface. The iSCSI Initiator Properties window appears.
2. Navigate to the Discovery tab.
3. Click the Discover Portal button. The Discover Target Portal dialog appears. Type in 127.0.0.1. Click the Advanced button. Select Microsoft iSCSI Initiator as your Local adapter and select your Initiator IP (leave the default for 127.0.0.1).
Click OK. Then click OK again to complete the Target Portal discovery.
4. Click the Discover Portal… button again.
The Discover Target Portal dialog appears. Type in the first IP address of the partner node you will use to connect the secondary mirrors of the HA devices. Click Advanced.
6. Select Microsoft iSCSI Initiator as your Local adapter and select the Initiator IP in the same subnet as the IP address of the partner server from the previous step.
Click OK. Then click OK again to complete the Target Portal discovery.
7. Click the Discover Portal… button again
8. Discover Target Portal dialog appears. Type in the IP address of the partner node you will use to connect the parent node of the HA devices to.
9. Click Advanced.
10. All target portals have been added on the first node.
11. Complete the same steps for the second node.
12. All target portals have been added on the second node
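For reference, the portal discovery above has a PowerShell equivalent in the built-in iSCSI module, sketched here; the partner and initiator addresses are hypothetical examples, so substitute the actual data-channel IPs of your nodes.

```powershell
# Sketch: discover the local and partner target portals.
# 10.10.0.10 and 192.168.0.10 are placeholder addresses.
New-IscsiTargetPortal -TargetPortalAddress '127.0.0.1'
New-IscsiTargetPortal -TargetPortalAddress '10.10.0.10' -InitiatorPortalAddress '192.168.0.10'

# List the targets made visible by the discovery.
Get-IscsiTarget
```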
Connecting Targets
1. Click the Targets tab. The previously created targets are listed in the Discovered Targets section.
NOTE: If the created targets are not listed, check the firewall settings of the StarWind Server as well as the list of networks served by the StarWind Server (go to StarWind Management Console -> Configuration -> Network).
2. Select a target of the witness located on the local server and click Connect.
3. Select Microsoft iSCSI Initiator in the Local adapter text field.
Select 127.0.0.1 in the Target portal IP.
Click OK. Then click OK again.
Do not connect the partner-target for the Witness device from the other StarWind node.
4. Select another target located on the local server and click Connect.
5. Select Microsoft iSCSI Initiator in the Local adapter text field. Select 127.0.0.1 in the Target portal IP.
Click OK. Then click OK again.
6. Select the partner-target from the other StarWind node and click Connect.
7. Select Microsoft iSCSI Initiator in the Local adapter text field. In the Initiator IP field, select the IP address for the ISCSI channel. In the Target portal IP, select the corresponding portal IP from the same subnet.
Click OK. Then click OK again.
8. Repeat the actions described in the steps above at the local StarWind node for all HA devices. The result should look like the screenshot below
9. Repeat all the steps of this section on the StarWind node in Azure, specifying corresponding local and data channel IP addresses. The result should look like the screenshot below.
Configuring Multipath
1. Configure the MPIO policy for each device, specifying localhost (127.0.0.1) as the active path. Select a target located on the local server and click Devices.
2. The Devices dialog appears. Click MPIO.
3. Select Fail Over Only as a load balance policy and then designate the local path as active.
4. You can check that 127.0.0.1 is the active path by selecting it from the list and clicking Details.
5. Repeat the same steps on the second node.
6. Initialize the disks and create partitions on them using the Computer Management snap-in. The disk devices must be visible on both nodes to create the cluster.
NOTE: it is recommended to initialize the disks as GPT.
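The disk initialization in step 6 can be scripted as a sketch; it claims every uninitialized disk on the server, so review the Get-Disk output before running it.

```powershell
# Sketch: initialize all RAW disks as GPT, partition, and format them NTFS.
# Review Get-Disk first; this touches every uninitialized disk on the server.
Get-Disk | Where-Object PartitionStyle -Eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
```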
Creating Failover Cluster
1. Open Server Manager. Select the Failover Cluster Manager item from the Tools menu.
2. Click the Create Cluster link in the Actions section of the Failover Cluster Manager.
3. Specify the servers to be added to the cluster
Click Next to continue.
4. Validate the configuration by passing the cluster validation tests: select “Yes…”
Click Next to continue.
5. Once the validation is done, click Finish.
6. Specify a cluster name and IP addresses for two subnets: On-Premises and Azure networks.
Click Next to continue.
7. Make sure that all the settings are correct. Click the Previous button to make any changes.
NOTE: If the "Add all eligible storage to the cluster" checkbox is selected, the wizard will try to add all StarWind devices to the cluster automatically. The smallest device will be assigned as the Witness.
8. The process of the cluster creation starts.
After it is completed, the system displays a report with detailed information
Click Finish to close the wizard.
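The validation and cluster creation performed by the wizard can also be sketched with the FailoverClusters PowerShell module. The node names and static addresses below are hypothetical placeholders for the on-premises and Azure servers.

```powershell
# Sketch: validate and create the two-node cluster from PowerShell.
# Node names and static addresses are placeholders, not values from this guide.
Test-Cluster -Node 'SW-ONPREM-01', 'SW-AZURE-01'

New-Cluster -Name 'SW-CLUSTER' -Node 'SW-ONPREM-01', 'SW-AZURE-01' `
    -StaticAddress '192.168.0.100', '10.10.0.100' -NoStorage
```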
Adding Cluster Shared Volumes
To add Cluster Shared Volumes (CSV), which are necessary to work with Hyper-V virtual machines:
1. Open Failover Cluster Manager.
2. Go to Cluster->Storage -> Disks.
3. Click Add Disk in the Actions panel, choose StarWind disks from the list, and click OK.
4. To configure a Witness drive, right-click the Cluster->More Actions->Configure Cluster Quorum Settings, follow the wizard, and use a default quorum configuration.
NOTE: to avoid unnecessary CSV overhead, configure each CSV to be owned by one cluster node. This node should also be the preferred owner of the VMs running on that node.
5. Right-click the required disk and select Add to Cluster Shared Volumes.
Once the disks are added to the cluster shared volumes list, you can start creating highly available virtual machines on them.
6. Check the Server Name’s Dependencies. For two different subnets, ‘OR’ dependency type must be configured for the failover between locations.
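If you prefer PowerShell, the CSV steps above can be sketched with the FailoverClusters module; the disk name 'Cluster Disk 2' is a hypothetical example, so check the Get-ClusterResource output for the actual names.

```powershell
# Sketch: list clustered disks and promote one to a Cluster Shared Volume.
# Run on a cluster node; 'Cluster Disk 2' is a hypothetical resource name.
Get-ClusterResource | Where-Object ResourceType -Eq 'Physical Disk'

Add-ClusterSharedVolume -Name 'Cluster Disk 2'

# CSVs appear under C:\ClusterStorage\ on every node.
Get-ClusterSharedVolume
```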
Creating Highly Available VMs
1. Create a VM(s) on a cluster node. Use a CSV to house the VM.
2. Open the VM settings to enable Live Migration between nodes with different processor versions.
3. Install the OS on the VM.
4. Perform Live Migration and VM failover between On-Premises and Azure nodes.
The configuration of StarWind Hybrid Cloud for Azure is complete, and the cluster is ready for production.