Posted by Jon Toigo on October 11, 2017
Back to Enterprise Storage

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks. A few years ago, the hypervisor vendors seized on customer anger over overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon. Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market. However, the downside of creating siloed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.

Learn More

Posted by Ivan Talaichuk on September 7, 2017
Hyperconvergence – another buzzword or the King of the Throne?

Before we start our journey through the storage world, I would like to begin with a side note on what hyperconverged infrastructure is and which problems this cool word combination really solves. Folks who already have a grip on hyperconvergence can just skip the first paragraph, where I’ll describe the HCI components plus a backstory about this tech. Hyperconverged infrastructure (HCI) is a term coined by Steve Chambers and Forrester Research (at least Wikipedia says so). They created this word combination to describe a fully software-defined IT infrastructure that is capable of virtualizing all the components of conventional ‘hardware-defined’ systems.

Learn More

Posted by Jon Toigo on August 22, 2017
The Pleasant Fiction of Software-Defined Storage

Whether you have heard it called software-defined storage, referring to a stack of software used to dedicate an assemblage of commodity storage hardware to a virtualized workload, or hyper-converged infrastructure (HCI), referring to a hardware appliance with a software-defined storage stack and maybe a hypervisor pre-configured and embedded, this “revolutionary” approach to building storage was widely hailed as your best hope for bending the storage cost curve once and for all. With storage spending accounting for a sizable percentage – often more than 50% – of a medium-to-large organization’s annual IT hardware budget, you probably welcomed the idea of an SDS/HCI solution when the idea surfaced in the trade press, in webinars and at conferences and trade shows a few years ago.

Learn More

Posted by Jon Toigo on August 17, 2017
The Need For Liquidity in Data Storage Infrastructure

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show. As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used. Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered “liquid.” When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability. High-liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.

Learn More

Posted by Alex Bykovskyi on August 16, 2017
Ceph-all-in-one

This article describes the deployment of a Ceph cluster on a single instance, or, as it’s called, “Ceph-all-in-one”. As you may know, Ceph is a unified Software-Defined Storage system designed for great performance, reliability, and scalability. With the help of Ceph, you can build an environment of the desired size. You can start with a one-node system, and there are no limits on its sizing. I will show you how to build the Ceph cluster on top of one virtual machine (or instance). You should never use such a scenario in production, only for testing purposes. The series of articles will guide you through the deployment and configuration of different Ceph cluster builds.
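For a flavor of what such an all-in-one test deployment looks like, here is a minimal sketch using the classic ceph-deploy tool. The hostname `ceph-aio` and the spare disk `/dev/sdb` are assumptions for illustration, and a production cluster would never run with a replica count of 1 – these settings exist purely to let a single-node lab cluster report healthy.

```shell
# Illustrative single-node Ceph sketch (ceph-deploy); hostname "ceph-aio"
# and disk /dev/sdb are assumed. For testing only -- never production.
ceph-deploy new ceph-aio                 # generate initial cluster config

# Test-only settings: allow the cluster to be healthy with one replica
# on one host (CRUSH normally insists on separate hosts).
echo "osd pool default size = 1"       >> ceph.conf
echo "osd crush chooseleaf type = 0"   >> ceph.conf

ceph-deploy install ceph-aio             # install Ceph packages on the node
ceph-deploy mon create-initial           # bring up the initial monitor
ceph-deploy osd create ceph-aio:sdb      # turn the spare disk into an OSD
ceph -s                                  # check cluster health
```

The two `ceph.conf` overrides are the essence of the all-in-one trick: with default settings, Ceph refuses to reach `HEALTH_OK` because it cannot place replicas across multiple hosts.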

Learn More

Posted by Jon Toigo on April 7, 2017
Data Management Moves to the Fore. Part 3: Data Management Requires Storage Resource and Services Management Too

Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberative manner – that is, in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself. From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within infrastructure over time.

Learn More

Posted by Jon Toigo on April 4, 2017
Data Management Moves to the Fore. Part 2: Data Management Has Many Moving Parts

In the previous blog, we established that there is a growing need to focus on Capacity Utilization Efficiency in order to “bend the cost curve” in storage. Just balancing data placement across repositories (Capacity Allocation Efficiency) is insufficient to cope with the impact of data growth and generally poor management. Only by placing data on infrastructure in a deliberative manner that optimizes data access, storage services, and costs can IT pros possibly cope with the coming data deluge anticipated by industry analysts.

Learn More

Posted by Jon Toigo on March 16, 2016
World Backup Day Is Coming

At the end of March, an event little known outside of a small community of vendors will happen: World Backup Day. Expect a flurry of blogs and tweets and posts and all of the other stuff that goes along with such marketing events. Then, expect the discussion to go silent for another year…unless a newsworthy data disaster occurs.

Truth be told, backup has never been front of mind for IT planners. Most planners don’t even consider how they will back up the data they will be storing when they go out to purchase storage rigs. And most have no clue regarding which data needs to be protected. Backup is an afterthought.


Learn More

Posted by Jon Toigo on February 18, 2016
Hyper-Convergence Takes Hold

Hyper-converged infrastructure, when we started to hear about it last year, was simply an “appliantization” of the architecture and technology of software-defined storage (SDS) running in concert with server virtualization technology. Appliantization means that the gear peddler was doing the heavy lift of pre-integrating server and storage hardware with hypervisor and SDS software so that the resulting kit would be pretty much plug-and-play.


Learn More