Posted by Boris Yurchenko on June 12, 2018
Basic Hyper-V virtual NIC management

Let’s be honest: any system administrator may one day face the need to hot-add network interfaces to guest VMs in a Microsoft Hyper-V environment. And that’s no problem, as Windows Server 2016 brought a whole set of useful features, one of which is the ability to add and remove network adapters on running VMs. Moreover, you can do that in two ways: the GUI, if you’re looking for a straightforward process, or PowerShell, if you’re a fan of automation.
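
For illustration, here’s a minimal PowerShell sketch of hot-adding and then removing an adapter on a running VM; the VM, switch, and adapter names are placeholders, not values from the article:

    # Hot-add a network adapter to a running VM (names are examples)
    Add-VMNetworkAdapter -VMName "TestVM" -SwitchName "ExternalSwitch" -Name "NIC2"

    # Confirm the new adapter is attached
    Get-VMNetworkAdapter -VMName "TestVM"

    # Hot-remove the adapter when it's no longer needed
    Remove-VMNetworkAdapter -VMName "TestVM" -Name "NIC2"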

Learn More

Posted by Alex Khorolets on April 3, 2018
Windows Server 2016 Core configuration. Part 3: Failover Clustering

Looking back at the previous articles in our “How-to-Core basics” series, we installed the Core version of Windows Server 2016, set up the required networks, and created the storage for the virtual machines.

In the final part of the trilogy, I’ll cover the remaining steps needed to make your production environment highly available and fault-tolerant.

In short, last time we installed the Core version of Windows Server on a single server and added the storage as an iSCSI target. Highly available, fault-tolerant storage requires another server to create the failover cluster. Its configuration doesn’t differ much from the steps we performed previously.
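
To give you a taste of the final part, a minimal PowerShell sketch for validating and forming a two-node cluster could look like this (the node names, cluster name, and IP address are placeholders):

    # Install the Failover Clustering feature on each node
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    # Validate the configuration of the intended nodes
    Test-Cluster -Node "Node1", "Node2"

    # Form the cluster with a static management IP address
    New-Cluster -Name "CoreCluster" -Node "Node1", "Node2" -StaticAddress "192.168.0.100"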

Learn More

Posted by Boris Yurchenko on February 20, 2018
Don’t break your fingers with hundreds of clicks – automate Windows iSCSI connections

If you have a single environment with only a few iSCSI targets discovered from a couple of target portals, messing with automation may not be worth it. Yet, if you have multiple environments with a bunch of portals and targets that need to be discovered and connected, all of them more or less similar in configuration, you may find relief in automating the whole process.
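
As a sketch of the idea, the routine boils down to a few PowerShell cmdlets; the portal address below is a placeholder, and a real script would simply loop over the list of your portals:

    # Register a target portal so its targets can be discovered (address is an example)
    New-IscsiTargetPortal -TargetPortalAddress "192.168.10.10"

    # Connect every discovered target that isn't connected yet, persisting across reboots
    Get-IscsiTarget | Where-Object { -not $_.IsConnected } |
        Connect-IscsiTarget -IsPersistent $true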

I hope to post more automation topics here, so stay tuned and check the StarWind blog from time to time.

Learn More

Posted by Taras Shved on December 27, 2017
Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment

Introduction

In the previous article, I’ve described three scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn’t at all). For this, I’m going to examine how NVMe-oF performs on a bare-metal configuration, and on an infrastructure with Hyper-V and ESXi deployed. In each case, I’ll also evaluate the performance of the iSER transport using LIO and SPDK iSCSI. Now that you have an overall understanding of the project, it’s time to move on to configuring our testing environment.

NVMe-oF on bare metal

Learn More

Posted by Taras Shved on December 20, 2017
Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©

There’s a common opinion that performance in general, and IOPS-intensive performance like NVMe over Fabrics in particular, is usually lower in virtualized environments due to hypervisor overhead. Therefore, I’ve decided to run a series of tests to prove or knock down this belief. For this purpose, I’ll use three scenarios for measuring the performance of NVMe over Fabrics in different infrastructures: first, on a bare-metal configuration; second, with Microsoft Hyper-V deployed on the client server; and finally, with ESXi 6.5.

In each case, I’ll also compare the performance of NVMe-oF, NVMe-oF over iSER transport, and SPDK iSCSI.

NVMe over Fabrics on the client server

Learn More

Posted by Alex Khorolets on November 14, 2017
StarWind iSER technology support

In the modern IT world, almost every tech guy, whether a systems administrator or an engineer, wants their environment to show the best results that can be squeezed out of the hardware. In this article, I want you to take a look at StarWind’s support of iSER technology, which stands for iSCSI Extensions for RDMA.

There’s not much change in the overall system configuration. iSER utilizes the common iSCSI protocol over an RDMA transport, which is available on network adapters with hardware offload capability. This means that iSER can deliver higher bandwidth, intended for large transfers of block storage data.
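
As a generic check (not a StarWind-specific step), you can verify from PowerShell whether your network adapters expose RDMA before counting on iSER; the adapter name below is an example:

    # List adapters and whether RDMA is enabled on each
    Get-NetAdapterRdma

    # Enable RDMA on a capable adapter
    Enable-NetAdapterRdma -Name "Ethernet 2"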

RDMA feature

Learn More

Posted by Andrea Mauro on October 10, 2017
The dark side of converged storage networks

The SAN (Storage Area Network) fabric in Fibre Channel solutions has always been a dedicated network, with dedicated components (like FC switches).

But, starting with the iSCSI and FCoE protocols, the storage fabric can now be shared with the traditional network infrastructure, because at least layers 1 and 2 sit on a common Ethernet layer (for iSCSI, layers 3 and 4 are also the same as in TCP/IP networks).

Hosts (the initiators) in a converged network typically use Converged Network Adapters (CNAs) that provide both Ethernet and storage functions (usually FCoE and iSCSI). The result is that the LAN and SAN share the same physical network:

LAN and SAN shared on the same physical network

Learn More

Posted by Alex Khorolets on July 14, 2017
Windows Server 2016 Core configuration. Part 1: step-by-step installation

This series of articles will guide you through the basic deployment of the Microsoft Windows Server 2016 Core version, covering all the steps from the initial installation to the deployment of the Hyper-V role and the Failover Cluster configuration.

The first and main thing you need to double-check before installing Windows Server 2016 Core is whether your hardware meets the system requirements of WS 2016. This is also very important when planning your environment, so you can be sure you have enough compute resources to run your production workload.
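
As a quick pre-flight check from an existing Windows box, a small PowerShell sketch like this reports CPU and memory so you can compare them against the WS 2016 minimums (a 1.4 GHz 64-bit processor and 512 MB of RAM for the Core option); the snippet is illustrative, not from the article:

    # Report processor model/clock and installed memory for a requirements check
    $cpu = Get-CimInstance -ClassName Win32_Processor | Select-Object -First 1
    $memGB = (Get-CimInstance -ClassName Win32_ComputerSystem).TotalPhysicalMemory / 1GB
    "CPU: $($cpu.Name) @ $($cpu.MaxClockSpeed) MHz"
    "RAM: {0:N1} GB installed" -f $memGB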

Windows Server installation

Learn More

Posted by Dmytro Khomenko on April 28, 2017
Integrating StarWind Virtual Tape Library (VTL) with Microsoft System Center Data Protection Manager

The reason for writing this article was to eliminate any possible confusion in the process of configuring the StarWind Virtual Tape Library together with Microsoft System Center Data Protection Manager.

Integration with SCDPM provides the benefit of a consolidated view of alerts across all your DPM 2016 servers. Alerts are grouped by disk or tape, data source, protection group, and replica volume, which simplifies troubleshooting. The grouping functionality is further complemented by the console’s ability to separate issues that affect only one data source from problems that impact multiple data sources. Alerts are also separated into backup failures and infrastructure problems.

StarWind VTL configuration with SCDPM

Learn More

Posted by Vladislav Karaiev on February 17, 2017
Storage HA on the Cheap: Fixing Synology DiskStation Flaky Performance with StarWind Free. Part 3 (Failover Duration)

We are continuing our series of articles dedicated to Synology’s DS916+ mid-range NAS units. Remember, we don’t dispute that Synology is capable of delivering a great set of NAS features. Instead, we are running a series of tests on a pair of DS916+ units to determine whether they can be used as general-purpose primary production storage. In Part 1, we tested the performance of the DS916+ in different configurations and determined how to significantly increase the performance of a “dual” DS916+ setup by replacing the native Synology DSM HA Cluster with StarWind Virtual SAN Free.

Synology DS916 and StarWind

Learn More