Windows Server 2016 Core configuration. Part 3: Failover Clustering
Posted by Alex Khorolets on April 3, 2018


Looking back at the previous articles in our “How-to-Core basics” series, we installed the Core version of Windows Server 2016, configured the required networks, and created the storage for the virtual machines.

In the final part of the trilogy, I’ll cover the remaining steps needed to prepare the environment and make your production highly available and fault-tolerant.

In short, last time we installed the Windows Server Core version on a single server and added the storage as an iSCSI target. Highly available, fault-tolerant storage requires a second server to create the failover cluster. The required configuration differs little from the steps we performed previously.
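As a preview of what the article walks through, the cluster can be brought up entirely from PowerShell. This is a minimal sketch with assumed node and cluster names (SRV-CORE-01, SRV-CORE-02, CORE-CLUSTER) and an assumed management IP:

```powershell
# Install the Failover Clustering feature on each node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Validate both nodes before creating the cluster
Test-Cluster -Node SRV-CORE-01, SRV-CORE-02

# Create the cluster with a static management IP address
New-Cluster -Name CORE-CLUSTER -Node SRV-CORE-01, SRV-CORE-02 -StaticAddress 192.168.0.100
```

Running `Test-Cluster` first is worthwhile: Microsoft only supports clusters whose configuration passes validation.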

Learn More

Network File System: access your files remotely as easily as if they were local
Posted by Alex Khorolets on November 30, 2017

Why should I need complicated workarounds to access files located on a company server or in my home lab? And how can I make remote files available to my local applications without any extra steps? The answer to both questions lies in four words: Network File System protocol.

I’d like to start with a general description of the NFS technology, some background on its purpose, and how it was created. The story goes back to the mid-80s when, alongside Van Halen’s new “1984” album, a company named Sun Microsystems created the Network File System protocol. It allowed users to access files on remote servers over a network as if those files were located on their own machines.

Since then, several versions of the NFS protocol have been released. Originally, the protocol operated over UDP until the NFSv3 update, which added TCP as a transport service. That allowed transferring larger blocks than UDP had permitted. The latest versions of the NFS protocol, including v4, v4.1, and v4.2, were developed by the Internet Engineering Task Force (IETF), a standards organization. They brought performance improvements, multiple security updates, and better scalability.
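On Windows Server, trying this out takes only a couple of commands. A minimal sketch, assuming a server named nfs-srv exporting /export/share (both names are hypothetical):

```powershell
# Install the built-in NFS client feature on Windows Server
Install-WindowsFeature -Name NFS-Client

# Mount the remote export as local drive N: with anonymous access
mount -o anon \\nfs-srv\export\share N:

# The remote files are now accessible like local ones
dir N:\
```

On Linux clients the equivalent is a plain `mount -t nfs nfs-srv:/export/share /mnt` with the nfs-utils package installed.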

NFS file server configuration

Learn More

StarWind iSER technology support
Posted by Alex Khorolets on November 14, 2017

In the modern IT world, almost every tech guy, whether a systems administrator or an engineer, wants to squeeze the best possible results out of his hardware. In this article, I want you to take a look at StarWind’s support for iSER, which stands for iSCSI Extensions for RDMA.

There’s not much of a change in the overall system configuration. iSER uses the common iSCSI protocol over an RDMA transport, which is available on network adapters with hardware offload capability. This allows iSER to deliver higher bandwidth for large transfers of block storage data.
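Before planning an iSER deployment, it is worth checking whether your NICs actually expose RDMA. A quick sketch using the standard Windows cmdlets:

```powershell
# List network adapters and whether RDMA is enabled on each
Get-NetAdapterRdma

# Enable RDMA on a specific adapter (adapter name is an assumption)
Enable-NetAdapterRdma -Name "Ethernet 2"
```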

RDMA feature

Learn More

StarWind Cloud VTL for AWS and Veeam
Posted by Alex Khorolets on August 9, 2017


These days, almost every company that stores its own data on-premises, whether it be virtual machines, databases, or just plain files, needs to be sure that this data won’t be lost due to an accident or human error. Moreover, backup solutions are usually expected to be flexible, feature-rich, and to utilize various storage-optimization technologies for better cost-efficiency.

The additional requirement of long-term data retention significantly increases the overall cost of the final solution, which has driven demand for public clouds like Amazon Web Services and Azure as backup repositories.

StarWind Cloud VTL for AWS and Veeam creates an additional backup storage tier, fulfilling the “3-2-1 backup” strategy, and provides the ability to send older backups to AWS S3 and Glacier. Virtual tape backups are also self-protected against ransomware, since all data is sealed into “containers” that cannot be directly affected by malicious software.

StarWind Cloud VTL configuration

Learn More

Windows Server Core configuration. Part 2: Hyper-V role installation
Posted by Alex Khorolets on July 25, 2017

In the previous article, we covered the basics of Microsoft Windows Server Core installation. After configuring the operating system and specifying the networks and storage for the future setup, there are a few more things left.

Our next step is to install and configure the Hyper-V role.

Installing the Hyper-V role itself is extremely simple. Open a PowerShell window by typing the “powershell” command at the command prompt. To install the Hyper-V role through PowerShell, enter the following:
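For reference, the typical form of this command uses the standard `Install-WindowsFeature` cmdlet:

```powershell
# Install the Hyper-V role with its management tools, then reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```

The `-Restart` switch reboots the server automatically once the role is installed, which Hyper-V requires before it can run virtual machines.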

Installing the Hyper-V role

Learn More

Windows Server 2016 Core configuration. Part 1: step-by-step installation
Posted by Alex Khorolets on July 14, 2017

This series of articles will guide you through the basic deployment of Microsoft Windows Server 2016 Core version, covering all the steps from an initial installation to the deployment of Hyper-V role and Failover Cluster configuration.

The first and main thing to double-check before installing Windows Server 2016 Core is whether your hardware meets the system requirements of WS 2016. This is also very important when planning your environment, to be sure you have enough compute resources to run your production workload.
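A quick way to inspect the compute resources of an existing host from the command line (handy on Core, where there is no GUI) is a sketch like this:

```powershell
# Report CPU count, physical memory, and OS edition of the local host
Get-ComputerInfo -Property CsNumberOfLogicalProcessors, CsTotalPhysicalMemory, OsName
```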

Windows Server installation

Learn More

Virtual Tape Library on Azure used with Microsoft System Center Data Protection Manager 2016
Posted by Alex Khorolets on May 22, 2017

Tapes have been on the “backup market” for a long time and are still considered a good option to store and secure large amounts of backup data. However, the continuous growth of backup sizes becomes a bottleneck in terms of fitting backup windows, and reduces the simplicity and redundancy of backup solutions. Today, more and more companies consider backing up their production data to more reasonable storage solutions, like clouds.

This is where a technology that emulates physical tapes on top of inexpensive, fast, high-capacity spindle drives comes in handy.

System Center 2016 DPM Administrator Console

Learn More

Supermicro SuperServer E200-8D/E300-8D review
Posted by Alex Khorolets on April 21, 2017

These days, more and more companies need high-quality, reliable, and efficient server hardware. Home labs, used by IT enthusiasts and professionals for software development and testing, studying for IT certifications, and configuring virtual environments, have become popular as well. Small companies whose production runs on a couple of virtual machines or networking applications are also interested in cheap and compact servers.

Supermicro has held one of the leading positions in server development for a long time, with products ranging from high-end clusters to microservers. Recently, the company released two compact servers: the SuperServer E200-8D and its younger sibling, the SuperServer E300-8D.

Supermicro SuperServers

Learn More

RAM Disk technology: Performance Comparison
Posted by Alex Khorolets on February 23, 2017


Every computer has a volatile amount of storage available in its RAM. Compared to other direct-access media used for data storage, such as hard disks, CD-RWs, DVD-RWs, and the older drum memory, the time needed to read or write data on those media varies with the physical location of the data and with the mechanics of the medium itself (rotation speed and arm movement).

Using RAM as storage provides a number of benefits over conventional devices, since data is read or written in the same amount of time regardless of its physical location inside the volume. With all of the above in mind, it would be a crime not to take advantage of these conditions.
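The difference is easy to see for yourself with `Measure-Command`. A minimal sketch, assuming a RAM disk mounted as R: and a regular volume as C: (both drive letters are assumptions, and the test file paths are hypothetical):

```powershell
# Prepare a 256 MB buffer of test data
$data = New-Object byte[] (256MB)

# Time a write to the RAM disk volume
$ramDisk = Measure-Command {
    [System.IO.File]::WriteAllBytes('R:\test.bin', $data)
}

# Time the same write to a regular disk volume
$hardDisk = Measure-Command {
    [System.IO.File]::WriteAllBytes('C:\test.bin', $data)
}

"RAM disk:  {0:N0} ms" -f $ramDisk.TotalMilliseconds
"Hard disk: {0:N0} ms" -f $hardDisk.TotalMilliseconds
```

Note that OS write caching can mask the gap on small files, so larger test sizes give a more honest comparison.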


Learn More