Posted by Taras Shved on December 27, 2017
Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment

In the previous article, I described 3 scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn't at all). For this, I'm going to examine how NVMe-oF performs on a bare-metal configuration, and on an infrastructure with Hyper-V and ESXi deployed. In each case, I'll also evaluate the performance of the iSER transport using LIO and SPDK iSCSI. Now that you have an overall understanding of the project, it's time to move on to configuring our testing environment.

Learn More

Posted by Didier Van Hoye on June 2, 2017
Why do we always see Responder CQE Errors with RoCE RDMA?

Anyone who has configured and used SMB Direct with RoCE RDMA Mellanox cards appreciates the excellent diagnostic counters Mellanox provides for use with Windows Performance Monitor. They are instrumental when it comes to finding issues and verifying everything is working correctly.

Learn More

Posted by Mikhail Rodionov on May 30, 2017
Windows Server 2016: NIC Teaming functionality

NIC teaming is not something we got with Windows Server 2016, but I find it interesting to review this functionality as we have it in the current iteration of Windows Server, touching, as usual, on the basics and history of this feature.

Learn More

Posted by Gary Williams on January 11, 2017
Exploring VMware’s VPID Technology

I’ve been using VMware’s VPID (Virtual Port ID) technology for some time now, both at work and in the home lab, but I was curious to see just how VMware handled a NIC going down and then coming back up. It turned out to be a lot more powerful and smooth than I first thought.

Learn More

Posted by Romain Serre on September 23, 2016
Manage VM placement in Hyper-V cluster with VMM

The placement of virtual machines in a Hyper-V cluster is an important step to ensure performance and high availability. To make an application highly available, it is usually deployed across two or more virtual machines. If a Hyper-V node crashes, the application must keep working.

But VM placement also concerns storage and networking. Think of a storage solution where you have several LUNs (or Storage Spaces) with different service levels: perhaps one LUN with HDDs in RAID 6 and another with SSDs in RAID 1. You don’t want a VM that requires intensive I/O to be placed on the HDD LUN.

Storage Classification in Virtual Machine Manager

Learn More

Posted by Askar Kopbayev on August 11, 2016
Choosing the ideal mini server for a home lab

Yesterday I saw a blog post in the Homelab subreddit discussing which Intel NUC to choose. I have spent quite some time recently choosing the right server for my homelab expansion, and I have considered a lot of options.

Like many other fellow IT professionals, I was also looking at the Intel NUC, but luckily, last month I read on TinkerTry.com that Supermicro had just released new Mini-1U SuperServers – the SYS-E300-8D and SYS-E200-8D. After some discussions with my colleagues and other people on Reddit and TinkerTry, I came to the conclusion that if you aim to run a home lab for virtualization, the Intel NUC shouldn’t be considered. I believe Supermicro is the new king of the mini-server market for home labs.

SYS-E200-8D
Learn More

Posted by Askar Kopbayev on July 27, 2016
vSphere Replication traffic isolation

vSphere Replication has proved to be a great bonus to any paid vSphere license. It is an amazingly simple tool that provides a cheap, semi-automated disaster recovery solution. Another great use case for vSphere Replication is the migration of virtual machines.

vSphere Replication 6.x came with plenty of new useful features:

  • Network traffic compression to reduce replication time and bandwidth consumption
  • Linux guest OS quiescing
  • Increase in scalability – one VRA server can replicate up to 2000 virtual machines
  • Replication traffic isolation – that is what we are going to talk about today.

The goal of traffic separation is to enhance network performance by ensuring that replication traffic does not impact other business-critical traffic. This can be done by using VDS Network I/O Control to set limits or shares for outgoing or incoming replication traffic. Another benefit of traffic isolation is that it addresses the security concern of mixing sensitive replication traffic with other traffic types.

The replication traffic flow
Learn More

Posted by Romain Serre on February 26, 2016
Deploy Hyper-V VM Switches and vNIC consistency with PowerShell

When a large Hyper-V infrastructure is deployed, Virtual Machine Manager (VMM) is often installed. It enables you to deploy logical switches, which are mainly VM Switches and virtual network adapters for Live Migration, backup, heartbeat, storage, or management purposes. However, small and medium businesses don’t necessarily implement VMM because it is expensive or not well known. Even if you don’t use VMM, you can deploy consistent VM Switches on several Hyper-V hosts by using PowerShell. This topic aims to show you how to deploy VM Switches and virtual network adapters with PowerShell, so that you can build a standard script to configure your Hyper-V hosts.
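As a rough sketch of what such a standard script might look like (switch name, adapter names, VLAN IDs, and bandwidth weights below are all example values, not taken from the original post):

```powershell
# Hypothetical example: create one VM Switch and the usual management-OS
# vNICs on a host. Assumes a physical adapter (or team) named "LM-Team".
New-VMSwitch -Name "vSwitch-LAN" -NetAdapterName "LM-Team" `
             -AllowManagementOS $false -MinimumBandwidthMode Weight

# Add virtual network adapters in the management OS for each traffic type
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "vSwitch-LAN"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "vSwitch-LAN"
Add-VMNetworkAdapter -ManagementOS -Name "Backup"        -SwitchName "vSwitch-LAN"

# Tag each vNIC with its VLAN (example IDs)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Backup" -Access -VlanId 30

# Reserve a relative share of bandwidth for Live Migration traffic
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
```

Running the same script on every host is what gives you the consistency VMM would otherwise enforce.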

Managing a teaming

Teaming may be necessary if you want a highly available environment.
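A team can be created in PowerShell before binding the VM Switch to it; a minimal sketch, assuming two physical adapters named "NIC1" and "NIC2" (the names and the team name are examples):

```powershell
# Hypothetical example: build a switch-independent team from two NICs,
# with the Dynamic load-balancing algorithm available since Windows Server 2012 R2
New-NetLbfoTeam -Name "LM-Team" -TeamMembers "NIC1","NIC2" `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm Dynamic -Confirm:$false
```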

Learn More