How to configure a Multi-Resilient Volume on Windows Server 2016 using Storage Spaces
Posted by Vitalii Feshchenko on October 24, 2017

Introduction

Plenty of articles have already been published about Storage Spaces and everything around this topic. However, I would like to consolidate the relevant information and walk you through configuring Storage Spaces on a standalone host.

The main goal of the article is to show a Multi-Resilient Volume configuration process.

How it works

In order to use Storage Spaces, we need to have faster (NVMe, SSD) and slower (HDD) devices.

So, with a set of NVMe devices along with SAS or SATA HDDs, we create a performance tier and a capacity tier, respectively.

The NVMe tier is used for caching: when hot blocks are written to the storage array, they are written to the caching tier (SSDs or NVMe) first:

Data in Performance Tier
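To give a quick idea of what the configuration looks like, here is a minimal PowerShell sketch of creating a tiered pool and a multi-resilient ReFS volume on a standalone host. The pool, tier, and volume names, as well as the tier sizes, are purely illustrative; the full step-by-step procedure is what the article walks through.

    # Pool all eligible disks into a single storage pool (names and sizes are examples only)
    New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Windows Storage*" `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

    # Define a mirror (performance) tier on the fast media and a parity (capacity) tier on HDDs
    New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "Performance" `
        -MediaType SSD -ResiliencySettingName Mirror
    New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "Capacity" `
        -MediaType HDD -ResiliencySettingName Parity

    # Create one ReFS volume spanning both tiers - this is the multi-resilient volume
    New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "MRV01" -FileSystem ReFS `
        -StorageTierFriendlyNames "Performance","Capacity" -StorageTierSizes 100GB,1TB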


SMB Direct in a Windows Server 2016 Virtual Machine Experiment
Posted by Didier Van Hoye on October 12, 2017

Introduction

Ever since Windows Server 2012, we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.

Traditionally, in order for SMB Direct to work, the SMB stack needs direct access to the RDMA NICs. This means that right up to Windows Server 2012 R2 we had SMB Direct running on physical NICs in the host or the parent partition/management OS. You could not have RDMA exposed on a vNIC or even on a host native NIC team (LBFO). SMB Direct was also not compatible with SR-IOV. That was, and for those OS versions still is, common knowledge and a design consideration. With Windows Server 2016, things changed. You can now have RDMA exposed on a vSwitch and on management OS vNICs. Even better, the new Switch Embedded Teaming (SET) allows RDMA to be exposed in the same way on top of a teamed vSwitch. SET is an important technology here, as RDMA is still not exposed on a native Windows team (LBFO).
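As a rough illustration of what that looks like in practice, here is a PowerShell sketch of building a SET vSwitch and exposing RDMA on management OS vNICs in Windows Server 2016. The physical adapter and vNIC names are assumptions for the example; the article covers the actual experiment in detail.

    # Create a Switch Embedded Teaming (SET) vSwitch over two RDMA-capable physical ports
    # ("NIC1" and "NIC2" are example adapter names)
    New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $false

    # Add management OS vNICs for SMB traffic and enable RDMA on them
    Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName "SETswitch"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB02" -SwitchName "SETswitch"
    Enable-NetAdapterRdma -Name "vEthernet (SMB01)","vEthernet (SMB02)"

    # Verify that the vNICs now report RDMA as enabled
    Get-NetAdapterRdma | Format-Table Name, Enabled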

Mellanox InfiniBand Router


Google Cloud Trying to Catch Up: NVIDIA GPUs and Discounts for Virtual Machines
Posted by Augusto Alvarez on October 3, 2017

We’ve reviewed several times the clear supremacy of AWS and Azure in cloud services market share (more details about recent surveys can be found here: “AWS Bigger in SMBs but Azure is the Service Most Likely to Renew or Purchase”), with Google Cloud landing in third place for most services. Now they are adding new NVIDIA GPUs for their virtual machines and sustained use discounts for customers running the new NVIDIA-equipped VMs.
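For a feel of how those GPUs are consumed, here is a hedged gcloud sketch of launching a Compute Engine instance with an NVIDIA GPU attached; the instance name, zone, machine type, and GPU model are illustrative only, and GPU availability varies by region.

    # Create an instance with one NVIDIA Tesla K80 attached (example values only)
    gcloud compute instances create gpu-test-vm \
        --zone us-east1-c \
        --machine-type n1-standard-4 \
        --accelerator type=nvidia-tesla-k80,count=1 \
        --maintenance-policy TERMINATE \
        --restart-on-failure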

NVIDIA GPUs for Google Compute Engine


Hyperconvergence – another buzzword or the King of the Throne?
Posted by Ivan Talaichuk on September 7, 2017

Before we start our journey through the storage world, I would like to begin with a side note on what hyperconverged infrastructure is and which problems this cool word combination really solves.

Folks who already have a good grip on hyperconvergence can just skip the first paragraph, where I’ll describe the HCI components plus a bit of backstory about this tech.

Hyperconverged infrastructure (HCI) is a term coined by Steve Chambers and Forrester Research (at least Wikipedia says so). They created this word combination to describe a fully software-defined IT infrastructure capable of virtualizing all the components of conventional ‘hardware-defined’ systems.

Hyperconverged system


Ceph-all-in-one
Posted by Alex Bykovskyi on August 16, 2017

Introduction

This article describes the deployment of a Ceph cluster on a single instance, or, as it is called, “Ceph-all-in-one”. As you may know, Ceph is a unified Software-Defined Storage system designed for great performance, reliability, and scalability. With Ceph, you can build an environment of whatever size you need: you can start with a one-node system, and there are no limits to how far it can grow. I will show you how to build a Ceph cluster on top of one virtual machine (or instance). You should never use such a scenario in production; it is for testing purposes only.
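As a teaser of the kind of configuration involved, a single-node test cluster typically needs a few ceph.conf overrides so that Ceph is allowed to place replicas on one host. A minimal, illustrative sketch (the values are assumptions for a lab setup, not production advice):

    # Minimal ceph.conf overrides for a one-node lab cluster (illustrative values)
    [global]
    # keep two replicas instead of the default three
    osd pool default size = 2
    # allow I/O to continue with a single replica available
    osd pool default min size = 1
    # choose replica placement across OSDs rather than across hosts
    osd crush chooseleaf type = 0

With these settings in place, checking the cluster status (for example with ceph -s) is the quickest way to confirm that the single-node cluster came up healthy.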

The series of articles will guide you through the deployment and configuration of different Ceph cluster builds.

 

check the ceph cluster status


3 Generations of My Homelabs
Posted by Askar Kopbayev on August 15, 2017

Sooner or later, every single IT guy comes to the idea of having a lab. There are a million reasons why you would need one: learning new technologies, improving skills, trying crazy ideas you would never dare to try on the production network, you name it. Even though it is a work-related activity for most home labbers, for many of us it is just another hobby. That’s why people spend so many hours of their personal time building a homelab, investing significant funds in new hardware, thoroughly planning its setup, looking for help in online communities, or sharing their experience to help others. There is a whole universe of home labbers, and I am happy to be part of this community.

In this post, I would like to share my experience with the 3 generations of home labs I have had so far and my thoughts about the next generation.

high-level network diagram


VMware vCenter Server Appliance Homelab tips
Posted by Vladan Seget on June 15, 2017

Many IT administrators and virtualization guys run homelabs at home. It is a good way to learn new technologies and to be able to break things in a lab in order to build stronger skills.

It is sometimes a challenge to squeeze as much as possible out of the available RAM. The main constraint is always memory utilization. VMware’s VMs keep getting more memory hungry, and they are not “optimized” for homelab use but rather for production environments. That is, after all, the main purpose of those VMs.

One of the largest, but most critical, VMs is the VMware vCenter Server Appliance (VCSA). This product is becoming very popular within VMware communities, and it is very easy to set up. Today we will have a look at whether we can apply some optimizations and “tweaks” to make it less memory hungry.

Set service startup manual


Design a ROBO infrastructure. Part 4: HCI solutions
Posted by Andrea Mauro on June 7, 2017

2-nodes hyperconverged solution

As written in the previous post, for a ROBO scenario the most interesting HCI (Hyper-Converged Infrastructure) configuration is a two-node configuration, considering that two nodes can be enough to run a dozen VMs (or even more).

For this reason, not all hyperconverged solutions are suitable for this case (for example, Nutanix or SimpliVity need at least 3 nodes). And it is not simple to scale an enterprise solution down to a small size, due to architectural constraints.

Actually, there are some interesting products specifically targeting HCI in the ROBO scenario:

  • VMware Virtual SAN in a 2-node cluster
  • StarWind Virtual Storage Appliance
  • StorMagic SvSAN

StarWind Virtual SAN overall architecture


Why do we always see Responder CQE Errors with RoCE RDMA?
Posted by Didier Van Hoye on June 2, 2017


Anyone who has configured and used SMB Direct with RoCE RDMA Mellanox cards appreciates the excellent diagnostic counters Mellanox provides for use with Windows Performance Monitor. They are instrumental when it comes to finding issues and verifying everything is working correctly.
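For reference, the same kind of data can also be pulled from PowerShell. A small sketch follows; note that the responder CQE error counters live in the Mellanox diagnostics counter set, whose exact name depends on the driver version installed, so list the available sets first.

    # List the RDMA-related counter sets available on this host
    Get-Counter -ListSet "*RDMA*", "*Mellanox*" | Select-Object CounterSetName

    # Sample the built-in RDMA Activity counters for a quick sanity check
    Get-Counter -Counter "\RDMA Activity(*)\RDMA Inbound Bytes/sec" -SampleInterval 2 -MaxSamples 5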

Many have complained about the complexity of DCB configuration, but in all earnest, any large network under congestion that needs specialized configuration has challenges due to scale. This is no different for DCB. You need the will to tackle the job at hand and do it right. Doing anything at scale reliably and consistently means automating it. Lossless Ethernet, mandatory or not, requires DCB to shine. There is little other choice today until networking technology and newer hardware solutions take an evolutionary step forward. I hope to address this in a future article. But that is not what we are going to discuss here. We’ve moved beyond that challenge. We’ll talk about one of the issues that confuses a lot of people.

Responder CQE errors report after virtual machines migration from Hyper-V cluster


Integrating StarWind Virtual Tape Library (VTL) with Microsoft System Center Data Protection Manager
Posted by Dmytro Khomenko on April 28, 2017

Introduction

The reason for writing this article is to eliminate any possible confusion in the process of configuring StarWind Virtual Tape Library paired with Microsoft System Center Data Protection Manager.

Integration with SCDPM provides the benefit of a consolidated view of alerts across all your DPM 2016 servers. Alerts are grouped by disk or tape, data source, protection group, and replica volume, which simplifies troubleshooting. The grouping functionality is further complemented by the console’s ability to separate issues that affect only one data source from problems that impact multiple data sources. Alerts are also separated into backup failures and infrastructure problems.

StarWind VTL configuration with SCDPM
