Posted by Jon Toigo on October 11, 2017

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks.  A few years ago, the hypervisor vendors seized on customer anger over overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon.  Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market.  However, the downside of creating siloed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.
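To get a rough sense of where that efficiency goes, consider the back-of-the-envelope arithmetic below. The figures are invented for illustration, not taken from the post; the point is only that reserve capacity stranded in many small, host-bound silos adds up to lower utilization than one shared pool.

```python
# Hypothetical illustration: the same 120 TB of data held in one shared
# pool versus spread across ten host-centric silos. All numbers are
# invented for the sake of the arithmetic, not drawn from the article.

data_tb = 120.0

# Shared enterprise array: one pool sized with roughly 25% headroom.
shared_capacity_tb = data_tb * 1.25
shared_utilization = data_tb / shared_capacity_tb                # 0.80

# Siloed SDS/HCI: ten hosts, each sized for its own peak plus headroom,
# so free space is stranded behind each server and cannot be shared.
silo_count = 10
silo_capacity_tb = 18.0                                          # purchased per host
siloed_utilization = data_tb / (silo_count * silo_capacity_tb)   # ~0.67

print(f"shared pool utilization:    {shared_utilization:.0%}")
print(f"siloed hosts utilization:   {siloed_utilization:.0%}")
print(f"efficiency lost to siloing: {shared_utilization - siloed_utilization:.0%}")
```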

[Image: components of shared vs. siloed storage]

The Pleasant Fiction of Software-Defined Storage
Posted by Jon Toigo on August 22, 2017

Whether you have heard it called software-defined storage (a stack of software used to dedicate an assemblage of commodity storage hardware to a virtualized workload) or hyper-converged infrastructure (HCI, a hardware appliance with a software-defined storage stack, and perhaps a hypervisor, pre-configured and embedded), this “revolutionary” approach to building storage was widely hailed as your best hope for bending the storage cost curve once and for all.  With storage spending accounting for a sizable percentage – often more than 50% – of a medium-to-large organization’s annual IT hardware budget, you probably welcomed the idea of an SDS/HCI solution when it surfaced in the trade press, in webinars, and at conferences and trade shows a few years ago.

[Image: storage total cost of ownership]

The Need For Liquidity in Data Storage Infrastructure
Posted by Jon Toigo on August 17, 2017

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show.  As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used.  Assets that can be converted quickly to cash (preferably with minimal loss in value) in order to meet immediate and short-term obligations are considered “liquid.”

When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability.  High liquidity storage supports any workload operating under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block network, etc.), without sacrificing data protection, capacity scaling, or performance optimization.
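As a minimal sketch of that definition, the snippet below models a hypothetical storage pool and asks whether it can be handed to an arbitrary workload without giving up required protocols or services. The class and field names are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    """Hypothetical pool of capacity exposed by a storage infrastructure."""
    name: str
    protocols: set = field(default_factory=set)   # e.g. {"nfs", "smb", "iscsi", "s3"}
    services: set = field(default_factory=set)    # e.g. {"snapshot", "replication", "tiering"}
    free_tb: float = 0.0

@dataclass
class Workload:
    """Hypothetical consumer of storage: VM, container, database, etc."""
    name: str
    protocol: str
    required_services: set
    capacity_tb: float

def is_liquid_for(pool: StoragePool, wl: Workload) -> bool:
    """True if the pool can host the workload without sacrificing the
    protection, scaling, or performance services the workload needs."""
    return (wl.protocol in pool.protocols
            and wl.required_services <= pool.services
            and pool.free_tb >= wl.capacity_tb)

pool = StoragePool("virtual-pool-1", {"nfs", "iscsi", "s3"},
                   {"snapshot", "replication"}, free_tb=200.0)
vm = Workload("erp-db", "iscsi", {"snapshot", "replication"}, capacity_tb=12.0)
print(is_liquid_for(pool, vm))   # True: this asset is "liquid" for this workload
```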

[Image: hard disk drive cost per gigabyte]

Data Management Moves to the Fore. Part 4: Why Cognitive Data Management?
Posted by Jon Toigo on May 11, 2017

In previous installments of this blog, we have deconstructed the idea of cognitive data management (CDM) to identify its “moving parts” and to define what each part contributes to a holistic process for managing files and more structured content.

First and foremost, CDM requires a Policy Management Framework that identifies classes of data and specifies the hosting, protection, preservation, and privacy requirements of each data class over its useful life.  This component reflects the nature of data, whose access requirements and protection priorities tend to change over time.
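One way to picture such a framework, purely as an illustrative sketch (the class names and fields are assumptions, not taken from any product), is a table of policy records in which each row states what a data class requires at a given stage of its life:

```python
from dataclasses import dataclass

@dataclass
class DataClassPolicy:
    """Hypothetical entry in a policy management framework."""
    data_class: str            # e.g. "financial-records"
    max_age_days: int          # stage of the data's useful life this row covers
    hosting_tier: str          # where the data should live at this stage
    protection: str            # e.g. "sync-replica", "daily-backup"
    preservation_years: int    # retention obligation
    privacy: str               # e.g. "encrypt-at-rest", "public"

# Access and protection requirements relax as the data ages.
financial_records_policy = [
    DataClassPolicy("financial-records",   90, "flash",         "sync-replica", 7, "encrypt-at-rest"),
    DataClassPolicy("financial-records",  365, "capacity-disk", "daily-backup", 7, "encrypt-at-rest"),
    DataClassPolicy("financial-records", 2555, "tape-archive",  "offsite-copy", 7, "encrypt-at-rest"),
]
```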

[Image: elements of a cognitive data management platform]

Data Management Moves to the Fore. Part 3: Data Management Requires Storage Resource and Services Management Too
Posted by Jon Toigo on April 7, 2017

Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberative manner – that is,  in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself.  From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within infrastructure over time.
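A toy version of that idea, with invented class names and tiers and no claim to represent any particular cognitive engine, might evaluate a placement rule set against a file's class and age:

```python
import datetime

# Policies as the "programs" a cognitive data management engine runs:
# for each data class, an ordered list of (max_age_days, target_tier).
PLACEMENT_RULES = {
    "financial-records": [(90, "flash"), (365, "capacity-disk"), (float("inf"), "archive")],
    "media-assets":      [(30, "flash"), (float("inf"), "object-store")],
}

def target_tier(data_class: str, created: datetime.date, today: datetime.date) -> str:
    """Evaluate the policy for one object and return where it should live now."""
    age_days = (today - created).days
    for max_age, tier in PLACEMENT_RULES[data_class]:
        if age_days <= max_age:
            return tier
    return "archive"

today = datetime.date(2017, 4, 7)
print(target_tier("financial-records", datetime.date(2016, 12, 1), today))  # capacity-disk
```

Run periodically against an inventory of files, a loop of this kind is what “placing and moving data on and within infrastructure over time” amounts to.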

[Image: cognitive data management facility]

Data Management Moves to the Fore. Part 2: Data Management Has Many Moving Parts
Posted by Jon Toigo on April 4, 2017

In the previous blog, we established that there is a growing need to focus on Capacity Utilization Efficiency in order to “bend the cost curve” in storage.  Just balancing data placement across repositories (Capacity Allocation Efficiency) is insufficient to cope with the impact of data growth and generally poor management.  Only by placing data on infrastructure in a deliberative manner that optimizes data access, storage services, and costs can IT pros possibly cope with the coming data deluge anticipated by industry analysts.
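To make the distinction concrete with invented numbers (the prices and percentages below are illustrative assumptions, not figures from the post): balancing placement alone keeps everything on primary storage, while utilization-aware placement moves inactive data to cheaper tiers.

```python
# Hypothetical cost-curve arithmetic. All prices and ratios are invented.
total_tb = 100.0
inactive_share = 0.60          # data rarely or never re-referenced

cost_per_tb = {"flash": 500.0, "capacity-disk": 150.0, "archive": 30.0}

# Capacity Allocation Efficiency alone: data is balanced across arrays,
# but all of it stays on the same (expensive) class of storage.
cae_only_cost = total_tb * cost_per_tb["flash"]

# Capacity Utilization Efficiency: place data according to its access
# requirements; split the inactive share between cheaper tiers.
cue_cost = (total_tb * (1 - inactive_share) * cost_per_tb["flash"]
            + total_tb * inactive_share * 0.5 * cost_per_tb["capacity-disk"]
            + total_tb * inactive_share * 0.5 * cost_per_tb["archive"])

print(f"balanced placement only: ${cae_only_cost:,.0f}")    # $50,000
print(f"access-aware placement:  ${cue_cost:,.0f}")         # $25,400
```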

The problem with data management is that it hasn’t been advocated or encouraged by vendors in the storage industry.  Mismanaged data, simply put, drives the need for more capacity – and sells more kit.

[Image: components of a cognitive data management solution]

Data Management Moves to the Fore. Part 1: Sorting Out the Storage Junk Drawer
Posted by Jon Toigo on March 28, 2017

Most presentations one hears at industry trade shows and conferences have to do, fundamentally, with Capacity Allocation Efficiency (CAE).  CAE seeks to answer a straightforward question:  Given a storage capacity of x petabytes or y exabytes, how will we divvy up space to workload data in a way that reduces the likelihood of a catastrophic “disk full” error?

Essentially, from a CAE perspective, efficiency involves balancing the volume of bits across physical storage repositories in a way that does not leave one container nearly full while another has mostly unused space.  The reason is simple.  As the volume of data grows and the capacity of media (whether disk or flash) increases, a lot of data – with many users – can find its way into a single repository.  When that happens, access to the data can be impaired (a lot of access requests across a few bus connections can introduce latency).  This, in turn, shows up in slower application performance, whether the workload is a database or a virtual machine.
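In code terms, capacity allocation efficiency can be as simple as the sketch below, which places each new dataset in whichever repository will be least full afterward. The repository names and sizes are hypothetical, and note that this says nothing about whether the data belongs on that class of storage at all, which is the utilization question taken up in the later parts of this series.

```python
# Toy capacity-allocation balancer: spread new data so that no repository
# fills up (or becomes an I/O hot spot) while others sit mostly empty.
repositories = {
    "array-01": {"capacity_tb": 500.0, "used_tb": 460.0},
    "array-02": {"capacity_tb": 500.0, "used_tb": 210.0},
    "array-03": {"capacity_tb": 250.0, "used_tb": 190.0},
}

def place(dataset_tb: float) -> str:
    """Pick the repository with the lowest projected utilization."""
    def projected_utilization(name: str) -> float:
        repo = repositories[name]
        return (repo["used_tb"] + dataset_tb) / repo["capacity_tb"]
    target = min(repositories, key=projected_utilization)
    repositories[target]["used_tb"] += dataset_tb
    return target

print(place(25.0))   # array-02: the most headroom, so the lowest projected fill
```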

[Image: survey of 2,000 companies' disk storage environments]

Data Management Moves to the Fore. Introduction
Posted by Jon Toigo on March 23, 2017

To the extent that the trade press covers, meaningfully, the issues around digital information processing and technology, it tends to focus rather narrowly on the latter:  infrastructure.  The latest hardware innovation — the fastest processor, the slickest server, the most robust hyper-converged infrastructure appliance — tends to be the shiny new thing, hogging the coverage.

Occasionally, software gets a shot at the headlines:  hypervisors, containers, object storage systems, even APIs get their 10 minutes of fame from time to time.  But, even in these days of virtual servers and software-defined networks and storage, software is less entertaining than hardware and tends to get less coverage than tin and silicon.

Are We Trending Toward Disaster?
Posted by Jon Toigo on October 6, 2016

Interestingly, at the enterprise data center trade shows I have attended recently, the focus was on systemic risk and systemic performance rather than on discrete products or technologies, exactly the opposite of what I’ve read about hypervisor and cloud shows, where the focus has been on faster processors, faster storage (NVMe, 3D NAND) and faster networks (100 GbE).  This may be a reflection of the two communities of practitioners that exist in contemporary IT:  the AppDev folks and the Ops folks.

[Image: data management]

Is NVMe Really Revolutionary?
Posted by Jon Toigo on August 19, 2016

To hear advocates tell it, NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – is the most revolutionary thing that has ever happened in business computing.  While the technology provides a more efficient means of accessing flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can also be seen as the latest in a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course.  It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the von Neumann machine.  Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle.  That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on the disk electronics themselves – to mask or spoof the speed mismatch.
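The masking trick is easy to see in a toy model: a fast memory cache in front of a slow mechanical device absorbs most reads, and the effective latency depends on the hit ratio. The latency values below are placeholders chosen only to show a wide gap, not measurements of any device.

```python
# Toy model of caching ahead of disk: reads that hit a DRAM buffer skip
# the mechanical device. Latencies are assumed placeholder values.
DRAM_READ_US = 0.1        # assumed cache access time, microseconds
DISK_READ_US = 5000.0     # assumed seek + rotation + transfer time

def effective_read_latency_us(hit_ratio: float) -> float:
    """Average latency when a fraction of reads is served from cache."""
    return hit_ratio * DRAM_READ_US + (1 - hit_ratio) * DISK_READ_US

for hit_ratio in (0.0, 0.5, 0.9, 0.99):
    print(f"hit ratio {hit_ratio:>4.0%}: "
          f"{effective_read_latency_us(hit_ratio):8.1f} us per read")
```

NVMe attacks the other end of the same mismatch, removing controller overhead from the path to flash, which is why it reads as an evolution of this long-running effort rather than a revolution.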

[Image: latency comparison]
