Whip your Hyperconverged Failover Cluster into shape automatically and with no downtime using Microsoft’s Cluster Aware Updating
Posted by Paulsen Muzari on March 14, 2018

Some admins prefer cluster updates to be applied automatically. To make that possible, Microsoft designed a feature that facilitates patching of Windows Server 2012 through 2016 hosts configured in a failover cluster. Cluster-Aware Updating (CAU) does this automatically, thereby avoiding service disruption for clustered roles.

In this article, we are going to take a look at how to achieve this, assuming the cluster is built in a hyperconverged scenario with StarWind Virtual SAN used as the shared storage. Before going through the steps to set up CAU, we will examine this scenario.
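
As a preview of where we are headed, CAU can be driven entirely from PowerShell. The following is a minimal sketch, assuming a cluster named “SW-Cluster” (an illustrative name, not from the article) and the ClusterAwareUpdating module that ships with the Failover Clustering tools:

    # Minimal sketch: run one on-demand CAU updating pass. The cluster name
    # is illustrative. CAU drains one node at a time, so clustered roles
    # stay online on the remaining node(s) while each host is patched.
    Import-Module ClusterAwareUpdating
    Invoke-CauRun -ClusterName "SW-Cluster" -MaxFailedNodes 0 `
                  -RequireAllNodesOnline -Force

    # Optionally, add the CAU clustered role so the cluster patches itself
    # on a schedule (the third Tuesday of each month, purely as an example).
    Add-CauClusterRole -ClusterName "SW-Cluster" -DaysOfWeek Tuesday `
                       -WeeksOfMonth 3 -Force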

(more…)

Back to Enterprise Storage
Posted by Jon Toigo on October 11, 2017

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks.  A few years ago, the hypervisor vendors seized on consumer anger around overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon.  Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market.  However, the downside of creating silo’ed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.

[Image: Shared and silo’ed storage components]

(more…)

The Need For Liquidity in Data Storage Infrastructure
Posted by Jon Toigo on August 17, 2017

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show.  As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used.  Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered “liquid.”

When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability.  High liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.

[Image: Hard disk drive cost per gigabyte]

(more…)

Design a ROBO infrastructure. Part 4: HCI solutions
Posted by Andrea Mauro on June 7, 2017

[Image: 2-node hyperconverged solution]

As written in the previous post, for a ROBO scenario the most interesting HCI (Hyper-Converged Infrastructure) configuration is a two-node configuration, considering that two nodes can be enough to run dozens of VMs (or even more).

For this reason, not all hyperconverged solutions are suitable for this case (for example, Nutanix or SimpliVity need at least 3 nodes). And it is not simple to scale down an enterprise solution to a small size, due to architectural constraints.

Currently, there are some interesting products aimed specifically at HCI in a ROBO scenario:

  • VMware Virtual SAN in a 2-node cluster
  • StarWind Virtual Storage Appliance
  • StorMagic SvSAN

[Image: StarWind Virtual SAN overall architecture]

(more…)

Data Management Moves to the Fore. Part 4: Why Cognitive Data Management?
Posted by Jon Toigo on May 11, 2017

In previous installments of this blog, we have deconstructed the idea of cognitive data management (CDM) to identify its “moving parts” and to define what each part contributes to a holistic process for managing files and more structured content.

First and foremost, CDM requires a Policy Management Framework that identifies classes of data and specifies the hosting, protection, preservation, and privacy requirements of each data class over its useful life.  This component reflects the nature of data, whose access requirements and protection priorities tend to change over time.
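
To make the framework concrete, a single policy entry might look something like the following sketch (a hypothetical PowerShell hashtable; the data class, field names, and values are illustrative and not drawn from any particular CDM product):

    # Hypothetical sketch of one entry in a policy management framework:
    # a data class mapped to its hosting, protection, preservation, and
    # privacy requirements. All names and values are illustrative only.
    $PolicyFramework = @{
        'Financial-Records' = @{
            Hosting      = 'Tier-1 flash while active; capacity disk after year 1'
            Protection   = 'Hourly snapshots plus an off-site replica'
            Preservation = '7-year retention, then certified deletion'
            Privacy      = 'Encrypted at rest; access limited to the Finance group'
        }
    }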

[Image: Elements of a Cognitive Data Management platform]

(more…)

Design a ROBO infrastructure (Part 3): Infrastructure at remote office side
Posted by Andrea Mauro on March 30, 2017

Designing a ROBO scenario must ultimately match the reality of the customer’s needs and constraints, but also the type of workloads and the availability solutions possible for them.

[Image: Logical design of a ROBO scenario]

Here we can find the different types of approaches:

  • No server(s) at all
  • A few servers (that may be allowed to fail)
  • Some servers with “relaxed” availability requirements
  • Some servers with reasonable availability

Let’s analyze each of them.

[Image: Hyper-Converged Infrastructure structure]

(more…)

Design a ROBO infrastructure (Part 2): Design areas and technologies
Posted by Andrea Mauro on February 24, 2017

In the previous post, we explained and described business requirements and constraints in order to support design and implementation decisions suited for mission-critical applications, also considering how risk can affect design decisions.

Now we will match the following technology aspects against the design requirements:

  • Availability
  • Manageability
  • Performance and scaling
  • Recoverability
  • Security
  • Risk and budget management

[Image: ROBO design areas and technologies]

(more…)

The Virtualization Review Editor’s Choice Awards 2016
Posted by Oksana Zybinskaya on December 26, 2016

The Virtualization Review Editor’s Choice is a selection of the most outstanding virtualization products of 2016. It is based on the opinions and overviews of trusted experts in the fields of virtualization and cloud computing. This is not a “best of the best” rating: no formal criteria were applied to make the list. It is simply a collection of individual choices by writers who deal with the industry daily, and who have pointed out the virtualization solutions they found especially interesting and useful.

(more…)

Musings on Windows Server Converged Networking & Storage
Posted by Didier Van Hoye on August 19, 2016

Why you should learn about SMB Direct, RDMA & lossless Ethernet for both networking & storage solutions

[Image: Fully converged Hyper-V QoS. Courtesy of Microsoft]

Server, Hypervisor, Storage

Too many people still perceive Windows Server as “just” an operating system (OS). It’s so much more. It’s an OS, a hypervisor, and a storage platform with a highly capable networking stack. Both virtualization and cloud computing are driving the convergence of all these roles forward fast, with intent and purpose. We’ll position the technologies & designs that convergence requires and look at their implications for a better overall understanding of this trend.
(more…)

Is NVMe Really Revolutionary?
Posted by Jon Toigo on August 19, 2016

To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – it is the most revolutionary thing that has ever happened in business computing.  While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest of a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course.  It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the Von Neumann machine.  Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle.  That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on disk electronics directly – to help mask or spoof the mismatch of speed.

[Image: Latency comparison]

(more…)