The Need For Liquidity in Data Storage Infrastructure
Posted by Jon Toigo on August 17, 2017

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show. As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used. Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered “liquid.”

When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency, manageability, resiliency, or scalability. High-liquidity storage supports any workload running under any operating system, hypervisor, or container technology, accessed via any protocol (network file system, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.

[Chart: hard disk drive cost per gigabyte]


Ceph-all-in-one
Posted by Alex Bykovskyi on August 16, 2017

Introduction

This article describes the deployment of a Ceph cluster on a single instance, a setup known as “Ceph-all-in-one”. As you may know, Ceph is a unified Software-Defined Storage system designed for excellent performance, reliability, and scalability. With Ceph, you can build an environment of whatever size you need: you can start with a single node, and there is no upper limit on scaling. I will show you how to build a Ceph cluster on top of one virtual machine (or instance). You should never use such a scenario in production; it is suitable for testing purposes only.

This series of articles will guide you through the deployment and configuration of different Ceph cluster builds.
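As a preview of the kind of configuration involved: Ceph's default CRUSH rule spreads replicas across hosts, which a one-node cluster can never satisfy, so a couple of defaults have to be relaxed. A minimal sketch, not the article's exact steps (values shown are illustrative):

    # ceph.conf: minimal overrides for a one-node test cluster
    [global]
    osd pool default size = 2        # keep two copies of each object
    osd pool default min size = 1    # stay writable with one copy left
    osd crush chooseleaf type = 0    # replicate across OSDs, not hosts

    # Once the monitor and OSDs are up, verify cluster health:
    ceph -s

With chooseleaf type 0, CRUSH happily places both replicas on different OSDs of the same host, which is exactly what an all-in-one test box needs and exactly what you never want in production.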


[Screenshot: checking the Ceph cluster status]


Microsoft Azure Stack Reaches General Availability (GA) and Customers Will Receive It in September. Why Is This Important? Part I
Posted by Augusto Alvarez on July 18, 2017

Microsoft’s Hybrid Cloud appliance to run Azure in your datacenter has finally reached General Availability (GA), and the integrated systems partners (Dell EMC, HPE, and Lenovo for this first iteration) are formally taking orders from customers, who will receive their Azure Stack solutions in September. But what exactly does Azure Stack represent? Why is it important to organizations?


Design a ROBO infrastructure. Part 4: HCI solutions
Posted by Andrea Mauro on June 7, 2017

2-node hyperconverged solution

As written in the previous post, for ROBO scenarios the most interesting HCI (Hyper-Converged Infrastructure) configuration is a two-node setup, considering that two nodes can be enough to run dozens of VMs (or even more).

For this reason, not all hyperconverged solutions are suitable for this case (for example, Nutanix and SimpliVity need at least three nodes), and it is not simple to scale an enterprise solution down to a small size, due to architectural constraints such as quorum requirements (a sketch of the quorum arithmetic follows the list below).

That said, there are some interesting products specifically suited to HCI in ROBO scenarios:

  • VMware Virtual SAN in a 2-node cluster
  • StarWind Virtual Storage Appliance
  • StorMagic SvSAN
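
The reason three nodes is such a common floor is quorum: with only two voters, a network partition leaves each side unable to claim a majority, so a tie-breaker vote (vSAN's witness appliance, or a comparable witness mechanism in the other products) is needed. A toy illustration of the arithmetic in Python (names and structure are mine, not any vendor's code):

    # Toy majority-vote check showing why 2-node clusters need a witness.
    # Purely illustrative; real HCI quorum logic is far more involved.

    def has_quorum(votes_held: int, total_votes: int) -> bool:
        """A partition may keep running only with a strict majority."""
        return votes_held > total_votes / 2

    # Two nodes, no witness: a split leaves one vote per side, so
    # neither side may safely continue serving VMs.
    print(has_quorum(1, 2))   # False

    # Two nodes plus a witness (three votes total): whichever node
    # still reaches the witness holds 2 of 3 votes and keeps running.
    print(has_quorum(2, 3))   # True
    print(has_quorum(1, 3))   # False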

[Diagram: StarWind Virtual SAN overall architecture]


Data Management Moves to the Fore. Part 3: Data Management Requires Storage Resource and Services Management Too
Posted by Jon Toigo on April 7, 2017

Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberative manner – that is,  in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself.  From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within infrastructure over time.
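To make that idea concrete: a policy in this sense is little more than a rule mapping a data classification and an age to a storage target, which the engine then applies continuously. A minimal sketch in Python (the classes, tiers, and thresholds are invented for illustration, not taken from the series):

    # Sketch of a data-placement policy "program" for a cognitive engine.
    # Classifications, tiers, and age thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class DataAsset:
        path: str
        data_class: str   # e.g. "transactional" or "archival"
        age_days: int

    # Ordered rules: (class, minimum age in days, target tier).
    POLICIES = [
        ("transactional",   0, "flash"),
        ("transactional",  90, "capacity-disk"),
        ("archival",        0, "capacity-disk"),
        ("archival",      365, "tape"),
    ]

    def place(asset: DataAsset) -> str:
        """Return the tier of the last (most aged) rule the asset satisfies."""
        target = "capacity-disk"   # default landing tier
        for data_class, min_age, tier in POLICIES:
            if asset.data_class == data_class and asset.age_days >= min_age:
                target = tier
        return target

    print(place(DataAsset("/db/orders.ibd", "transactional", 10)))   # flash
    print(place(DataAsset("/db/orders.ibd", "transactional", 200)))  # capacity-disk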

[Diagram: Cognitive Data Management Facility]


Data Management Moves to the Fore. Part 2: Data Management Has Many Moving Parts
Posted by Jon Toigo on April 4, 2017

In the previous blog, we established that there is a growing need to focus on Capacity Utilization Efficiency in order to “bend the cost curve” in storage. Just balancing data placement across repositories (Capacity Allocation Efficiency) is insufficient to cope with the impact of data growth and generally poor management. Only by placing data on infrastructure in a deliberative manner that optimizes data access, storage services, and costs can IT pros possibly cope with the coming data deluge anticipated by industry analysts.
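To illustrate the distinction with made-up numbers (mine, not the article's): allocation efficiency asks only whether provisioned capacity is spread sensibly, while utilization efficiency asks how much of the stored data actually belongs where it sits.

    # Illustrative arithmetic only; these simplified ratios are my own
    # shorthand, not formal metrics defined in the series.
    total_tb     = 100.0   # raw capacity across repositories
    allocated_tb =  80.0   # capacity provisioned to workloads
    stored_tb    =  60.0   # everything currently written
    useful_tb    =  30.0   # data that genuinely belongs on this tier

    allocation_efficiency  = allocated_tb / total_tb   # 0.80: looks healthy
    utilization_efficiency = useful_tb / stored_tb     # 0.50: half misplaced

    print(f"Allocation efficiency:  {allocation_efficiency:.0%}")
    print(f"Utilization efficiency: {utilization_efficiency:.0%}")

An array can score well on the first ratio and poorly on the second; that gap is precisely what this series argues data management must close.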

The problem with data management is that it hasn’t been advocated or encouraged by vendors in the storage industry.  Mismanaged data, simply put, drives the need for more capacity – and sells more kit.

[Diagram: components of a cognitive data management solution]


Software-Defined Storage: StarWind Virtual SAN vs Microsoft Storage Spaces Direct vs VMware Virtual SAN
Posted by Anton Kolomyeytsev on June 16, 2016

This is a comprehensive comparison of the leading products in the Software-Defined Storage market: Microsoft Storage Spaces Direct, VMware Virtual SAN, and StarWind Virtual SAN. It provides numerous use cases based on different deployment scales and architectures, because the products have different aims. The market is large enough that the vendors used to occupy its different niches, but lately they have entered full-scale competition, adapting their products to meet general demand. This post is an analysis of how Microsoft, VMware, and StarWind fare in the Software-Defined Storage market right now. The approach is practical, and all the statements are based on the experience of virtualization administrators and engineers from all over the world.

SMB and ROBO


Manage It Already
Posted by Jon Toigo on April 27, 2016

As I review the marketing pitches of many software-defined storage products today, I am concerned that none of the software stack descriptions gives any attention whatsoever to capabilities for managing the underlying hardware infrastructure. This strikes me as a huge oversight.

The truth is that delivering storage services via software — orchestrating and administering the delivery of capacity, data encryption, data protection and other services to the data that are hosted on a software-defined storage volume – is only half of the challenge of storage administration.  The other part is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.
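By way of example, even a thin monitoring layer on top of that physical underlayment is better than none. A sketch in Python that shells out to smartmontools to poll drive health (assuming smartctl is installed; the device list is an example):

    # Sketch: poll drive health via smartmontools' smartctl.
    # Assumes smartctl is installed and run with sufficient privileges;
    # device paths are examples.
    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]

    def drive_healthy(dev: str) -> bool:
        """Run 'smartctl -H' and report whether the self-assessment passed."""
        result = subprocess.run(
            ["smartctl", "-H", dev],
            capture_output=True, text=True,
        )
        return "PASSED" in result.stdout

    for dev in DEVICES:
        print(f"{dev}: {'OK' if drive_healthy(dev) else 'NEEDS ATTENTION'}")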


World Backup Day Is Coming
Posted by Jon Toigo on March 16, 2016

At the end of March, an event little known outside of a small community of vendors will happen: World Backup Day. Expect a flurry of blogs and tweets and posts and all of the other stuff that goes along with such marketing events. Then expect the discussion to go silent for another year…unless a newsworthy data disaster occurs.

Truth be told, backup has never been front of mind for IT planners. Most planners don't even consider how they will back up the data they will be storing when they go out to purchase storage rigs. And most have no clue regarding which data needs to be protected. Backup is an afterthought.

[Diagram: backup steps]


Hyper-Convergence Takes Hold
Posted by Jon Toigo on February 18, 2016

Hyper-converged infrastructure, when we started to hear about it last year, was simply an “appliantization” of the architecture and technology of software-defined storage (SDS) running in concert with server virtualization technology. Appliantization means that the gear peddler was doing the heavy lifting of pre-integrating server and storage hardware with hypervisor and SDS software, so that the resulting kit would be pretty much plug-and-play.
