Internet Small Computer System Interface (iSCSI) was implemented as a protocol about 20 years ago, but iSCSI-based storage systems still offer many benefits for SMBs. Since standard SCSI and TCP/IP protocols lie at the core of iSCSI SANs, building one is an inexpensive and financially viable solution. Because iSCSI runs over an existing Ethernet network, an iSCSI SAN is also attractive for being hardware-agnostic: there is no need to buy the specialized adapters that Fibre Channel SAN deployments require. Building an iSCSI SAN is still a commodity undertaking. And what can give it extra value is StarWind iSCSI Accelerator/Load Balancer, a free lightweight tool for balancing iSCSI sessions between all the available server CPU cores.
The desire to build an iSCSI SAN and increase Ethernet bandwidth leads to a single solution: increasing the number of iSCSI sessions. The iSCSI protocol itself is sensitive to overload and does not tolerate latency well, yet higher latency, and consequently lower performance, is exactly what happens as the number of iSCSI sessions grows. On paper, Microsoft iSCSI Initiator should solve this problem. But there is one catch: the initiator hasn't kept up with the growth of server compute power. It was designed when CPUs had at most 2 physical cores, so it can balance workloads between only 2 cores. The potential of modern servers goes far beyond 2 cores, as does the number of iSCSI sessions needed for the highest possible Ethernet bandwidth.
Figure 1 – Microsoft iSCSI Initiator cannot distribute virtualized workloads.
Modern servers typically have 2 CPU sockets, each with multiple physical cores. Each physical core, in turn, can be split into 2 logical ones, something Microsoft iSCSI Initiator simply doesn't account for. Every time an iSCSI session starts, it still gets assigned to one of 2 cores. Such load distribution leads to situations where one or 2 cores are overwhelmed while the others sit idle.
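To make the skew concrete, here is a small illustrative sketch (the core and session counts are invented for the example, and the round-robin-over-2-cores model is a simplification of the initiator's behavior):

```python
from collections import Counter

CORES = 16      # logical cores on a hypothetical 2-socket server
SESSIONS = 20   # iSCSI sessions opened by the initiator

# Legacy behavior: every new session lands on one of only 2 cores.
legacy = Counter(session % 2 for session in range(SESSIONS))
print(dict(legacy))            # {0: 10, 1: 10} -> 2 cores saturated

idle_cores = CORES - len(legacy)
print(idle_cores)              # 14 cores contribute nothing
```

However many cores the server has, the per-core load never spreads beyond the same 2 cores, which is exactly the bottleneck described above.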
All this being said, the traditional iSCSI implementation limits overall infrastructure performance in Hyper-V environments, since your VMs utilize only 2 cores. As a result, you are forced to buy new hardware every time your applications need more IOPS, and for SMBs that isn't a smart investment. Admins therefore often fine-tune Microsoft iSCSI Initiator so that VMs get enough IOPS, which is a rather tedious task. Having a load balancer can save you both the hassle and the money.
StarWind iSCSI Accelerator is a free tool for balancing virtualized workloads between all CPU cores in Hyper-V servers. The Accelerator is implemented as a filter driver that sits between Microsoft iSCSI Initiator and the hardware presented over the network. Every time a new TCP session is created, it is assigned to a free CPU core. By distributing workloads this way, the solution ensures smart utilization of compute resources: no cores are overwhelmed while others sit idle.
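The assignment idea can be sketched as follows. This is not StarWind's actual driver code, just a minimal model of "each new TCP session goes to the currently least-loaded core," with the session and core counts chosen for illustration:

```python
import heapq
from collections import Counter

def assign_sessions(num_sessions: int, num_cores: int) -> list[int]:
    """Return the core chosen for each new session, least-loaded core first."""
    # Min-heap of (sessions_on_core, core_id): the freest core pops first.
    heap = [(0, core) for core in range(num_cores)]
    heapq.heapify(heap)
    placement = []
    for _ in range(num_sessions):
        load, core = heapq.heappop(heap)
        placement.append(core)
        heapq.heappush(heap, (load + 1, core))
    return placement

cores = assign_sessions(num_sessions=20, num_cores=16)
loads = Counter(cores)
print(len(loads))                               # 16 -> every core gets work
print(max(loads.values()) - min(loads.values()))  # 1 -> near-uniform spread
```

With 20 sessions across 16 cores, every core receives at least one session and no core carries more than one extra, which is the even utilization the figure below depicts.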
With this technology, Ethernet bandwidth increases along with overall performance. For example, with 20 iSCSI sessions defined, all CPU cores are utilized uniformly and latency approaches zero.
Figure 2 – StarWind iSCSI Accelerator in action. Balancing virtualized workloads between all CPU cores.
Unlike hardware accelerators, the solution on offer can be installed on any Windows server without any additional proprietary equipment.
Finally, StarWind iSCSI Accelerator is pretty intuitive and simple to use. Of course, you can keep fine-tuning Microsoft iSCSI Initiator, but it is a tedious process very few have time for. StarWind iSCSI Accelerator, however, is a “plug-and-play” solution: it is good to go from the moment it’s deployed.
StarWind iSCSI Accelerator is a free solution for balancing iSCSI sessions between multiple server CPU cores. It works in unison with Microsoft iSCSI Initiator and with iSCSI SANs from any vendor. This lightweight tool maximizes utilization of your hardware's compute resources, providing your VMs with the IOPS they need.