Synology + StarWind + Hyper-V

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

enric
Posts: 5
Joined: Fri Oct 13, 2017 10:22 am
Location: Barcelona

Fri Oct 13, 2017 10:53 am

Hi,

We currently have a 3-node Hyper-V Failover Cluster. Each node is connected to the office network over a 1GbE LAN, and over a second 1GbE LAN to an isolated iSCSI network with a Synology DS1813+, which has 6x4TB drives in RAID10 and 2x256GB SSDs as SSD cache. This setup is working fine, and the performance is not bad, but I know it could be better.

Now I want to add a second Synology, same model and disks, to add storage redundancy in case of a hardware failure. I was almost convinced to create a Synology HA Cluster, although I don't like the idea of having a passive server doing nothing until something bad happens to the active one.

Then I looked at the new Microsoft Storage Spaces Direct, but I discovered that each node needs a Datacenter license.

Finally I discovered StarWind Virtual SAN. Lots of good reviews. And I found this series of articles:
https://www.starwindsoftware.com/blog/s ... irtual-san
https://www.starwindsoftware.com/blog/s ... ile-system
https://www.starwindsoftware.com/blog/s ... r-duration

These articles describe more or less what I want to do, but before investing the (fairly large) amount of time it will take to rebuild our setup, I want to ask if my idea is good enough, or if I need to change anything:

1. Set up both Synologys as they are now: 6x4TB HDD in RAID10 + 2x256GB SSD as SSD cache. 2 iSCSI LUNs (block-level), one for Data and one for Quorum.
2. Connect each of them to one of the nodes, using as many direct connections (no switches) as I can, 4 if possible. I'll need to buy a couple of quad-port 1GbE network cards.
3. Connect both storage nodes with a single 10GbE connection. I'll need to buy a 10GbE network card and a direct cable between them. I don't know whether to go with RJ45 or SFP+.
4. Enable iSCSI Multipath I/O, so I can reach a "theoretical" 8x1Gb connection between each node and the iSCSI data, although the Synologys aren't going to be able to serve data at that speed.
5. Mount the StarWind HA device using iSCSI: the loopback for the local connection and the 10GbE link to the remote node (see the PowerShell sketch after this list).
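
Just to make steps 4 and 5 concrete, here is a minimal PowerShell sketch of enabling MPIO and connecting the StarWind HA device on one node; the portal addresses and the IQN filter are assumptions for illustration, not values from my actual setup:

Install-WindowsFeature -Name Multipath-IO          # reboot required after installing the MPIO feature
Enable-MSDSMAutomaticClaim -BusType iSCSI          # let MPIO claim iSCSI devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR # round-robin across the available paths

New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1     # loopback to the local StarWind target
New-IscsiTargetPortal -TargetPortalAddress 172.16.20.2   # assumed 10GbE address of the partner node

Get-IscsiTarget | Where-Object NodeAddress -like '*starwind*' |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true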

My doubts are:
- Is it better to keep using the Synology SSD cache, or to use the SSDs for something else, like StarWind L2 cache, or put them in some old workstation with regular HDDs?
- Do you think using 4 connections between the Synologys and the nodes is too much? I think that 3 connections, using one of the integrated NICs on the server plus a dual-port NIC, could be enough (and cheaper).
- Would it be better to mount the StarWind HA device using SMB3? I think this is also supported, but I may be wrong.
- Will I be able to use the third Hyper-V node if it has no iSCSI connection?

The Hyper-V Failover Cluster is currently hosting about 10 VMs, used mostly for development, with about 4GB of required storage data at the moment. They include MySQL, Subversion, a build server, web servers, a CRM, etc. I know I don't need a super-fast setup, but I want to use what I currently have as well as possible.

Thanks,

Enric
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Fri Oct 13, 2017 1:10 pm

Enric,

After studying your description of the current and the intended setups, some points remain unclear, especially the network connections. Could I ask you to draw two network diagrams, one of the current setup and one of the intended setup? Please mark the network connections between all instances according to their intended use. It does not have to be a piece of art, just a scheme that lets me understand your idea more clearly.
enric
Posts: 5
Joined: Fri Oct 13, 2017 10:22 am
Location: Barcelona

Sat Oct 14, 2017 12:09 pm

Hi Boris,

Here are the images:

1. The current setup with 1 Synology and 3 Hyper-V nodes (current.PNG)
2. The new setup with 2 Synologys and 2 nodes (Virtual SAN + Hyper-V) (new-2-node.PNG)
3. The new setup with 2 Synologys and 3 nodes (2 Virtual SAN + 3 Hyper-V) (new-3-nodes.PNG)
Do you think that is possible?

Thanks,

Enric
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Mon Oct 16, 2017 3:17 pm

Enric,

Thank you for the clear diagrams. They make it much easier to understand your intended configuration.
So, let's consider the 2-node and 3-node scenarios separately.

2 nodes:
Everything is fine, except that you will need to buy a dual-port 10 Gbit card rather than a single-port one. The reason is that you will need a separate link for the Synchronization channel (StarWind only) and another one for iSCSI (data read/write operations for clients). Apart from that, everything is reasonable. For Heartbeat, you can use the management network as well.
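
For illustration only, a minimal sketch of how the two 10 Gbit ports could be addressed on node 1 to keep the channels separate (the adapter names and subnets are assumptions, not values from this thread):

Rename-NetAdapter -Name 'Ethernet 3' -NewName 'StarWind-Sync'
Rename-NetAdapter -Name 'Ethernet 4' -NewName 'StarWind-iSCSI'
New-NetIPAddress -InterfaceAlias 'StarWind-Sync'  -IPAddress 172.16.10.1 -PrefixLength 24   # Synchronization channel (direct cable)
New-NetIPAddress -InterfaceAlias 'StarWind-iSCSI' -IPAddress 172.16.20.1 -PrefixLength 24   # iSCSI/data channel
# Heartbeat can simply reuse the existing management network, so it needs no dedicated port.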

3 nodes:
The depicted setup is also possible. In this case, the cross-connection link should go between node 1 and node 2, as you showed, and the 3rd node will be connected through a 10 Gbit switch. The same switch-based connection will also serve as the iSCSI link for all servers.

My only concern is how you are going to manage the Synology boxes, as they have no direct connection to the production network switches. But if you are fine with managing the boxes from your servers, I would not object.
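
For completeness, once all three nodes see the HA device over iSCSI, forming the cluster and exposing the Data LUN as a CSV would look roughly like this (cluster name, node names, disk names and IP are placeholders):

New-Cluster -Name 'HV-CLUSTER' -Node 'NODE1','NODE2','NODE3' -StaticAddress 192.168.1.50
Get-ClusterAvailableDisk | Add-ClusterDisk              # picks up the StarWind Data and Quorum disks
Add-ClusterSharedVolume -Name 'Cluster Disk 1'          # Data disk becomes C:\ClusterStorage\Volume1
Set-ClusterQuorum -DiskWitness 'Cluster Disk 2'         # small Quorum LUN as disk witness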
enric
Posts: 5
Joined: Fri Oct 13, 2017 10:22 am
Location: Barcelona

Mon Oct 16, 2017 3:34 pm

Hi Boris,

Thanks for your response.
Boris (staff) wrote: 2 nodes:
Everything is fine, except that you will need to buy a dual-port 10 Gbit card rather than a single-port one. The reason is that you will need a separate link for the Synchronization channel (StarWind only) and another one for iSCSI (data read/write operations for clients). Apart from that, everything is reasonable. For Heartbeat, you can use the management network as well.
Not a problem, as I had already thought about getting a dual-port card in case I want to upgrade to a 3-node setup later. I think I'm going to get a couple of Intel X540-T2 cards.
Boris (staff) wrote: 3 nodes:
The depicted setup is also possible. In this case, the cross-connection link should go between node 1 and node 2, as you showed, and the 3rd node will be connected through a 10 Gbit switch. The same switch-based connection will also serve as the iSCSI link for all servers.
That's exactly what I showed in the picture.
Boris (staff) wrote: My only concern is how you are going to manage the Synology boxes, as they have no direct connection to the production network switches. But if you are fine with managing the boxes from your servers, I would not object.
Yes, you're right. I think there are some solutions:
1. Add routing features on the servers so I can access the Synologys from the network. Too complicated, in my opinion.
2. Use just 3 direct connections between the servers and the Synologys. Then connect the fourth LAN port to the production network, so I can manage them and, more importantly, they can notify me by email if some important event (like a hard drive failure) happens. Moreover, since the servers are also connected to the production network, I can add that iSCSI path as well and still have a 4x1GbE connection between the Synologys and the servers. Anyway, I still don't think the Synologys will be able to saturate the 3x1GbE links.

There are a couple of questions I still have:
- What is going to give me more performance and features: using the SSDs in the Synologys as Synology SSD cache, or using them in the servers as StarWind Virtual SAN L2 cache?
- Could I share the StarWind Virtual SAN device over SMB3 instead of iSCSI?

Thanks again,

Enric
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Tue Oct 17, 2017 10:51 am

Enric,

As for your first question, I cannot confirm that we have ever tested that particular device, so I can only suggest trying its performance in your scenario. Otherwise, you can always switch to using StarWind L2 cache afterwards.
Concerning the second question, I would encourage you to have a look at the following article https://www.starwindsoftware.com/blog/f ... erver-2016 that describes how to implement an SMB3 share in a Microsoft Failover Cluster.
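
Roughly speaking (the role, share, path and account names below are placeholders, not taken from the article), the SMB3 approach means putting a continuously available file share on top of the clustered StarWind device and pointing Hyper-V at it:

Add-ClusterScaleOutFileServerRole -Name 'SOFS'
# Continuously available SMB3 share on a CSV backed by the StarWind HA device (folder assumed to exist)
New-SmbShare -Name 'VMs' -Path 'C:\ClusterStorage\Volume1\Shares\VMs' -ContinuouslyAvailable $true -FullAccess 'DOMAIN\NODE1$','DOMAIN\NODE2$','DOMAIN\NODE3$'
# Hyper-V hosts then store the VMs under \\SOFS\VMs instead of a local or iSCSI path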