
Re: Current HCI industry highest performance: 26M IOPS questions

Posted: Sat Jul 24, 2021 4:22 pm
by jdeshin
Dear Yaroslav, please check that I understand this correctly.
In the normal case we have the following situation:
[Attachment: normal.jpg]
node1 and node2 have local connections (127.0.0.1) and a partner iSCSI connection.
nodeN does not host the array, so it is connected only over iSCSI.
When the sync switch goes down, StarWind assigns one node as "main" and disables the iSCSI target on the node that is not "main".
So, we have the following result:
[Attachment: link.jpg]
After the sync switch comes back up, we will need to resync the volume from the "main" node.
When the iSCSI switch goes down, we lose all iSCSI communications except the local connections on node1 and node2. The CSV on nodeN becomes unavailable, and WSFC has to move the VMs to a node that has a local link to the storage.
[Attachment: iscsi.png]
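To make my understanding concrete, here is a rough sketch of that logic in Python (my own toy model and node roles, not how StarWind actually implements it):
[code]
# Toy model of the two failure cases as I understand them.
nodes = {
    "node1": {"has_local_storage": True},
    "node2": {"has_local_storage": True},
    "nodeN": {"has_local_storage": False},   # compute-only node, iSCSI client
}

def sync_switch_down(main="node1"):
    # Sync link lost: one storage node is kept as "main", the partner's iSCSI
    # target is disabled; a resync is needed once the sync link is back.
    for name, node in nodes.items():
        if node["has_local_storage"]:
            node["iscsi_target_active"] = (name == main)
    return f"{main} keeps serving; partner target disabled; resync required later"

def iscsi_switch_down():
    # iSCSI fabric lost: only nodes with a local (127.0.0.1) connection keep
    # the CSV; WSFC has to move the VMs from nodeN to node1 or node2.
    reachable = [name for name, node in nodes.items() if node["has_local_storage"]]
    return f"CSV available only on {reachable}; VMs on nodeN must move there"

print(sync_switch_down())
print(iscsi_switch_down())
[/code]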
Is it correct?

Best regards,
Yury

Re: Current HCI industry highest performance: 26M IOPS questions

Posted: Thu Jul 29, 2021 7:43 am
by yaroslav (staff)
Hi,

I learned that there are actually 2x PCIe 3.0 x16 slots, each on its own NUMA node.
Slots 1 and 5 are PCIe 3.0 x16.
You need redundant switches for iSCSI. If one switch fails, the other takes over.
In small deployments, where you have a point-to-point connection for Sync, you can use the Sync network for cluster-only communication. As I mentioned earlier, that should prevent a cluster split-brain. Alternatively, connect the Sync ports to both redundant switches, preferably use the same subnet for them, and set that cluster network for cluster-only communication.
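To illustrate the idea (just a simplistic Python sketch with example network names, not a StarWind or Microsoft tool): the iSCSI/data traffic should have at least two independent paths, while a cluster-only network such as Sync keeps the heartbeat alive if an iSCSI switch dies.
[code]
# Toy sanity check of a planned cluster network layout (names are examples).
networks = {
    # name: (independent physical paths, roles carried on that network)
    "iSCSI-1": (1, {"data"}),
    "iSCSI-2": (1, {"data"}),
    "Sync":    (1, {"sync", "cluster-only"}),
}

data_paths = sum(paths for paths, roles in networks.values() if "data" in roles)
heartbeat_fallback = any("cluster-only" in roles for _, roles in networks.values())

assert data_paths >= 2, "iSCSI needs redundant switches/paths"
assert heartbeat_fallback, "keep a cluster-only network (e.g. Sync) for heartbeat"
print(f"{data_paths} redundant data paths + a cluster-only heartbeat network")
[/code]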

Re: Current HCI industry highest performance: 26M IOPS questions

Posted: Sun Aug 01, 2021 3:43 pm
by jdeshin
Hi Yaroslav,
So, one lane of PCIe 3.0 runs at 8 GT/s; x8 lanes give about 7.88 GB/s (~64 Gbit/s) and x16 lanes about 15.8 GB/s (~128 Gbit/s). You used two dual-port 100 Gbit/s cards. Therefore, if you plug a dual-port card into an x16 connector, you can get only 2x50 Gbit/s full duplex or 2x100 Gbit/s half duplex.
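For reference, a quick back-of-the-envelope check of those numbers (a small Python sketch; the 128b/130b encoding factor is my assumption for how the ~7.88 GB/s figure comes out):
[code]
# Rough usable PCIe 3.0 bandwidth (assumes 128b/130b line encoding,
# ignores TLP/DLLP protocol overhead).
GT_PER_LANE = 8.0            # 8 GT/s per PCIe 3.0 lane
ENCODING = 128.0 / 130.0     # 128b/130b encoding efficiency

def pcie3_gbit(lanes: int) -> float:
    """Usable bandwidth in Gbit/s for a PCIe 3.0 link with `lanes` lanes."""
    return GT_PER_LANE * ENCODING * lanes

for lanes in (8, 16):
    gbit = pcie3_gbit(lanes)
    print(f"x{lanes}: ~{gbit:.0f} Gbit/s (~{gbit / 8:.2f} GB/s)")

# x8:  ~63 Gbit/s (~7.88 GB/s)
# x16: ~126 Gbit/s (~15.75 GB/s) -- well below 2 x 100 Gbit/s per direction
[/code]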
And my question was: why do you use 100 Gbit/s cards?
As far as I understand, you didn't have any network redundancy in the test. Is that true?

Best regards,
Yury

Re: Current HCI industry highest performance: 26M IOPS questions

Posted: Tue Feb 27, 2024 10:39 am
by anton (staff)
1) You're absolutely correct: it's the PCIe bus that is the limiting factor. We use 100 GbE NICs just because we need to unify our BoMs and stick with as few varying hardware components as possible. Software-only customers don't have this issue.

2) Nope, it's not what you should be pushing into production.

Re: Current HCI industry highest performance: 26M IOPS questions

Posted: Tue Feb 27, 2024 10:39 am
by anton (staff)
== thread closed ==