Posted by Paulsen Muzari on March 14, 2018
Whip your Hyperconverged Failover Cluster into shape automatically and with no downtime using Microsoft’s Cluster Aware Updating

Some admins prefer cluster updates to be done automatically. To that end, Microsoft designed a feature that facilitates patching of Windows Server 2012 through 2016 machines configured in a failover cluster. Cluster-Aware Updating (CAU) handles this automatically, thereby avoiding service disruption for clustered roles. In this article, we are going to take a look at how to achieve this, assuming the cluster is built as a hyperconverged scenario with StarWind Virtual SAN used as the shared storage. Before going through the steps to set up CAU, we will examine this scenario.
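As a rough, hedged illustration of what CAU automates (this sketch is not from the article), the Python snippet below simply shells out to the real Invoke-CauRun PowerShell cmdlet to trigger a single updating run; the cluster name and thresholds are placeholder values, and it assumes the Cluster-Aware Updating tools are installed and the caller has cluster administrative rights.

```python
# Minimal sketch: start one Cluster-Aware Updating run by shelling out to
# PowerShell. "HCI-Cluster" and the thresholds are placeholders; CAU drains,
# patches, and reboots nodes one at a time so clustered roles stay online.
import subprocess

def run_cau(cluster_name: str, max_failed_nodes: int = 0) -> None:
    command = (
        f"Invoke-CauRun -ClusterName {cluster_name} "
        f"-MaxFailedNodes {max_failed_nodes} -MaxRetriesPerNode 3 "
        "-RequireAllNodesOnline -Force"
    )
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        check=True,
    )

if __name__ == "__main__":
    run_cau("HCI-Cluster")
```

In a real deployment, the same options would more likely be configured once on the CAU clustered role (via Add-CauClusterRole) so that runs happen on a schedule rather than on demand.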

Learn More

Posted by Jon Toigo on October 11, 2017
Back to Enterprise Storage

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks.  A few years ago, the hypervisor vendors seized on consumer anger around overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon.  Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market.  However, the downside of creating silo’ed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.

Learn More

Posted by Jon Toigo on August 17, 2017
The Need For Liquidity in Data Storage Infrastructure

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show. As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used. Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered “liquid.” When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability. High-liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.
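Purely as a toy illustration (not from the post), the criteria listed above can be made concrete as a small data model: an asset is “liquid” only if it can serve a given platform and protocol without giving up protection or scalability. All names and values below are invented.

```python
# Toy model of the storage-"liquidity" criteria described above; names and
# example values are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class StorageAsset:
    name: str
    protocols: set = field(default_factory=set)   # e.g. {"nfs", "smb", "iscsi", "s3"}
    platforms: set = field(default_factory=set)   # e.g. {"vmware", "hyper-v", "kubernetes"}
    protected: bool = True                        # replication/backup in place
    scalable: bool = True                         # capacity can grow in place

    def can_serve(self, platform: str, protocol: str) -> bool:
        """'Liquid' storage serves any workload without sacrificing
        protection or scalability."""
        return (platform in self.platforms
                and protocol in self.protocols
                and self.protected
                and self.scalable)

pool = StorageAsset("tier1",
                    protocols={"iscsi", "nfs", "s3"},
                    platforms={"vmware", "hyper-v", "kubernetes"})
print(pool.can_serve("kubernetes", "s3"))   # True: allocatable to this workload
```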

Learn More

Posted by Andrea Mauro on June 7, 2017
Design a ROBO infrastructure. Part 4: HCI solutions

As written in the previous post, for a ROBO scenario the most interesting HCI (Hyper-Converged Infrastructure) configuration is a two-node configuration, considering that two nodes can be enough to run dozens of VMs (or even more). For this reason, not all hyperconverged solutions are suitable for this case (for example, Nutanix or SimpliVity need at least 3 nodes). And it is not simple to scale an enterprise solution down to a small size, due to its architectural constraints.

Learn More

Posted by Jon Toigo on May 11, 2017
Data Management Moves to the Fore. Part 4: Why Cognitive Data Management?

In previous installments of this blog, we have deconstructed the idea of cognitive data management (CDM) to identify its “moving parts” and to define what each part contributes to a holistic process for managing files and more structured content. First and foremost, CDM requires a Policy Management Framework that identifies classes of data and specifies the hosting, protection, preservation, and privacy requirements of each data class over its useful life. This component reflects the nature of data, whose access requirements and protection priorities tend to change over time.
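As a purely illustrative sketch (not taken from the series), the policy framework described above could be modeled as data classes whose hosting, protection, preservation, and privacy requirements change with the age of the data; every name and value below is invented.

```python
# Illustrative sketch of the Policy Management Framework idea: each data class
# maps the age of its data to a set of hosting/protection/preservation/privacy
# requirements. Names and values are invented.
from dataclasses import dataclass

@dataclass
class ClassPolicy:
    hosting: str            # e.g. "flash tier", "capacity tier", "archive"
    protection: str         # e.g. "sync replica", "nightly backup", "none"
    preservation_years: int
    private: bool           # subject to privacy controls

@dataclass
class DataClass:
    name: str
    schedule: list          # [(max_age_days, ClassPolicy), ...], oldest last

    def policy_for(self, age_days: int) -> ClassPolicy:
        for max_age, policy in self.schedule:
            if age_days <= max_age:
                return policy
        return self.schedule[-1][1]

records = DataClass("customer-records", [
    (90,    ClassPolicy("flash tier",    "sync replica",   7, True)),
    (365,   ClassPolicy("capacity tier", "nightly backup", 7, True)),
    (10**9, ClassPolicy("archive",       "none",           7, True)),
])
print(records.policy_for(200).hosting)   # "capacity tier"
```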

Learn More

Posted by Andrea Mauro on March 30, 2017
Design a ROBO infrastructure (Part 3): Infrastructure at remote office side

Designing a ROBO scenario must ultimately match the reality of the customer’s needs and constraints, but also the types of workloads and the availability solutions possible for them.

Learn More

Posted by Andrea Mauro on February 24, 2017
Design a ROBO infrastructure (Part 2): Design areas and technologies

In the previous post, we explained and described the business requirements and constraints that support design and implementation decisions suited for mission-critical applications, also considering how risk can affect design decisions.

Learn More

Posted by Oksana Zybinskaya on December 26, 2016
The Virtualization Review Editor’s Choice Awards 2016

The Virtualization Review Editor’s Choice is a selection of the most outstanding virtualization products of 2016. It is based on the opinions and outlooks of trusted experts in the fields of virtualization and cloud computing. This is not a “best of the best” rating, and no formal criteria were applied to make the list. It is simply a collection of individual choices by writers who deal with the industry daily and have pointed out the virtualization solutions they found especially interesting and useful.

Learn More

Posted by Didier Van Hoye on August 19, 2016
Musings on Windows Server Converged Networking & Storage

Why you should learn about SMB Direct, RDMA & lossless Ethernet for both networking & storage solutions

Fully converged Hyper-V QoS (image courtesy of Microsoft)

Server, Hypervisor, Storage

Too many people still perceive Windows Server as “just” an operating system (OS). It’s so much more. It’s an OS, a hypervisor, and a storage platform with a highly capable networking stack. Both virtualization and cloud computing are driving the convergence of all these roles forward fast, with intent and purpose. We’ll position the technologies & designs that convergence requires and look at their implications for a better overall understanding of this trend.
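As a small, hedged companion to the post (not part of it), the sketch below calls the built-in Get-NetAdapterRdma cmdlet from Python to list adapters with RDMA enabled, which is the capability SMB Direct relies on for converged storage and networking traffic; it assumes a Windows host with PowerShell available.

```python
# Rough sketch (not from the post): list RDMA-enabled network adapters, the
# capability SMB Direct uses. Assumes Windows with the built-in
# Get-NetAdapterRdma cmdlet available.
import json
import subprocess

def rdma_enabled_adapters() -> list:
    command = ("Get-NetAdapterRdma | Where-Object Enabled | "
               "Select-Object Name, Enabled | ConvertTo-Json")
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    out = result.stdout.strip()
    if not out:
        return []
    data = json.loads(out)
    return data if isinstance(data, list) else [data]

if __name__ == "__main__":
    for adapter in rdma_enabled_adapters():
        print(adapter["Name"])
```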
Learn More

Posted by Jon Toigo on August 19, 2016
Is NVMe Really Revolutionary?

To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – you would think it is the most revolutionary thing that has ever happened in business computing. While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest in a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course.  It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the Von Neumann machine.  Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle.  That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on disk electronics directly – to help mask or spoof the mismatch of speed.
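To make that caching point concrete, here is a minimal, hypothetical read-through cache in Python (not from the post): repeat reads are served from memory, and only a miss pays the slow device’s latency, which is how DRAM in front of a disk masks the speed mismatch.

```python
# Minimal read-through cache sketch (illustrative only): memory absorbs
# repeat reads so the slow mechanical device is touched only on a miss.
import time

class SlowDisk:
    """Stand-in for a mechanical disk with a fixed per-read penalty."""
    def __init__(self, latency_s: float = 0.005):
        self.latency_s = latency_s
        self.blocks = {}

    def read(self, lba: int) -> bytes:
        time.sleep(self.latency_s)           # simulated seek/rotation cost
        return self.blocks.get(lba, b"\x00" * 512)

class CachedDisk:
    def __init__(self, disk: SlowDisk):
        self.disk = disk
        self.cache = {}                      # lba -> block, i.e. the DRAM buffer

    def read(self, lba: int) -> bytes:
        if lba not in self.cache:            # miss: pay the device latency once
            self.cache[lba] = self.disk.read(lba)
        return self.cache[lba]               # hit: served at memory speed

dev = CachedDisk(SlowDisk())
dev.read(42)   # slow, goes to the "disk"
dev.read(42)   # fast, served from cache
```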

Latency comparison (image)

Learn More