The Need For Liquidity in Data Storage Infrastructure
Posted by Jon Toigo on August 17, 2017

Liquidity is a term you are more likely to hear on a financial news channel than at a technology trade show. As an investment-related term, liquidity refers to the amount of capital available to banks and businesses and to how readily it can be used. Assets that can be converted quickly to cash (preferably with minimal loss in value) to meet immediate and short-term obligations are considered “liquid.”

When it comes to data storage, liquid storage assets can be viewed as those that can be allocated to virtually any workload at any time without compromising performance, cost-efficiency/manageability, resiliency, or scalability. High-liquidity storage supports any workload running under any OS, hypervisor, or container technology, accessed via any protocol (network file systems, object storage, block storage networks, etc.), without sacrificing data protection, capacity scaling, or performance optimization.
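To make that definition a bit more concrete, here is a minimal sketch (in Python, with entirely hypothetical class and field names) of how one might test whether a storage pool is “liquid” enough for a given workload:

```python
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    """Hypothetical description of a storage pool's capabilities."""
    name: str
    protocols: set = field(default_factory=set)   # e.g. {"nfs", "smb", "s3", "iscsi"}
    protected: bool = True                        # data protection services available
    scalable: bool = True                         # capacity can grow non-disruptively
    max_latency_ms: float = 5.0                   # worst-case service latency offered

def is_liquid(pool: StoragePool, workload_protocols: set, latency_budget_ms: float) -> bool:
    """A pool is 'liquid' for a workload if it serves the workload's access protocols
    within its latency budget without giving up protection or capacity scaling."""
    return (workload_protocols <= pool.protocols
            and pool.protected
            and pool.scalable
            and pool.max_latency_ms <= latency_budget_ms)

# Example: a file/object pool evaluated against an S3-only analytics workload.
pool = StoragePool("pool-a", protocols={"nfs", "s3"}, max_latency_ms=3.0)
print(is_liquid(pool, {"s3"}, latency_budget_ms=5.0))   # True
```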

[Figure: Hard disk drive cost per gigabyte]

Learn More


Data Management Moves to the Fore. Part 4: Why Cognitive Data Management?
Posted by Jon Toigo on May 11, 2017

In previous installments of this blog, we have deconstructed the idea of cognitive data management (CDM) to identify its “moving parts” and to define what each part contributes to a holistic process for managing files and more structured content.

First and foremost, CDM requires a Policy Management Framework that identifies classes of data and specifies the hosting, protection, preservation, and privacy requirements of each class over its useful life. This component reflects the nature of data, whose access requirements and protection priorities tend to change over time.
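For illustration only, here is a minimal sketch of what one such data class policy might look like in code; the lifecycle stages, tiers, and field names are my own assumptions, not a specification of any CDM product:

```python
from dataclasses import dataclass

@dataclass
class LifecycleStage:
    """One stage in a data class's useful life (illustrative only)."""
    max_age_days: int    # stage applies until the data reaches this age
    tier: str            # e.g. "flash", "capacity-disk", "tape-archive"
    protection: str      # e.g. "sync-mirror", "nightly-backup", "none"
    encrypted: bool      # privacy requirement

@dataclass
class DataClassPolicy:
    name: str
    stages: list         # LifecycleStage entries, ordered by max_age_days

# A hypothetical policy for transactional records whose access and protection
# requirements relax as the data ages.
transactional = DataClassPolicy(
    name="transactional-records",
    stages=[
        LifecycleStage(max_age_days=90,   tier="flash",         protection="sync-mirror",    encrypted=True),
        LifecycleStage(max_age_days=365,  tier="capacity-disk", protection="nightly-backup", encrypted=True),
        LifecycleStage(max_age_days=2555, tier="tape-archive",  protection="none",           encrypted=True),
    ],
)
```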

[Figure: Elements of a Cognitive Data Management platform]

Learn More


Data Management Moves to the Fore. Part 3: Data Management Requires Storage Resource and Services Management Too
Posted by Jon Toigo on April 7, 2017

Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberative manner – that is,  in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself.  From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within infrastructure over time.
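Viewed that way, the cognitive engine is essentially an interpreter of those policy “programs.” A minimal sketch, assuming a policy object with ordered lifecycle stages like the hypothetical one in the Part 4 post above, might look like this:

```python
from datetime import datetime, timezone

def place(file_age_days: float, policy) -> str:
    """Return the tier a file should occupy, given its age and its class policy.
    `policy` is assumed to expose ordered .stages with .max_age_days and .tier
    attributes (an assumption of this sketch, not any real CDM engine's API)."""
    for stage in policy.stages:
        if file_age_days <= stage.max_age_days:
            return stage.tier
    return policy.stages[-1].tier   # older than the last stage: stay on the final tier

def migration_plan(catalog: dict, policy):
    """Yield (path, current_tier, target_tier) for files that are out of place.
    `catalog` maps each path to a (created_datetime, current_tier) tuple."""
    now = datetime.now(timezone.utc)
    for path, (created, current_tier) in catalog.items():
        target = place((now - created).days, policy)
        if target != current_tier:
            yield path, current_tier, target
```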

[Figure: Cognitive Data Management Facility]

Learn More


Data Management Moves to the Fore. Part 2: Data Management Has Many Moving Parts
Posted by Jon Toigo on April 4, 2017

In the previous blog, we established that there is a growing need to focus on Capacity Utilization Efficiency in order to “bend the cost curve” in storage. Merely balancing data placement across repositories (Capacity Allocation Efficiency) is insufficient to cope with the impact of data growth and generally poor management. Only by placing data on infrastructure in a deliberative manner – one that optimizes data access, storage services, and costs – can IT pros realistically cope with the coming data deluge anticipated by industry analysts.

The problem with data management is that it hasn’t been advocated or encouraged by vendors in the storage industry.  Mismanaged data, simply put, drives the need for more capacity – and sells more kit.

[Figure: Components of a Cognitive Data Management solution]

Learn More


Data Management Moves to the Fore. Part 1: Sorting Out the Storage Junk Drawer
Posted by Jon Toigo on March 28, 2017

Most presentations one hears at industry trade shows and conferences have to do, fundamentally, with Capacity Allocation Efficiency (CAE).  CAE seeks to answer a straightforward question:  Given a storage capacity of x petabytes or y exabytes, how will we divvy up space to workload data in a way that reduces the likelihood of a catastrophic “disk full” error?

Essentially, from a CAE perspective, efficiency involves balancing the volume of bits across physical storage repositories so that one container is not left nearly full while another sits mostly empty. The reason is simple. As the volume of data grows and the capacity of media (whether disk or flash) increases, a lot of data – belonging to many users – can find its way into a single repository. When that happens, access to the data can be impaired (many access requests funneled across a few bus connections introduce latency). This, in turn, shows up as slower application performance, whether the workload is a database or a virtual machine.
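Reduced to code, the CAE notion of efficiency is little more than a free-space balancing heuristic. The sketch below is hypothetical and purely illustrative; note that it considers nothing but capacity, which is the whole point:

```python
def pick_target(repos: list, incoming_tb: float) -> dict:
    """Capacity Allocation Efficiency in its simplest form: place new data on the
    repository that would remain least full, so no container sits nearly full
    while another is mostly empty.  Access patterns, service requirements, and
    cost are not considered -- only free space."""
    candidates = [r for r in repos if r["used_tb"] + incoming_tb <= r["capacity_tb"]]
    if not candidates:
        raise RuntimeError("disk full: no repository can absorb the new data")
    return min(candidates, key=lambda r: (r["used_tb"] + incoming_tb) / r["capacity_tb"])

repos = [
    {"name": "array-1", "capacity_tb": 500, "used_tb": 430},
    {"name": "array-2", "capacity_tb": 500, "used_tb": 120},
]
print(pick_target(repos, incoming_tb=50)["name"])   # array-2
```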

[Figure: Survey of 2000 company disk storage environments]

Learn More


Data Management Moves to the Fore. Introduction
Posted by Jon Toigo on March 23, 2017


To the extent that the trade press covers, meaningfully, the issues around digital information processing and technology, it tends to focus rather narrowly on the latter:  infrastructure.  The latest hardware innovation — the fastest processor, the slickest server, the most robust hyper-converged infrastructure appliance — tends to be the shiny new thing, hogging the coverage.

Occasionally, software gets a shot at the headlines:  hypervisors, containers, object storage systems, even APIs get their 10 minutes of fame from time to time.  But, even in these days of virtual servers and software-defined networks and storage, software is less entertaining than hardware and tends to get less coverage than tin and silicon.

Learn More


Are We Trending Toward Disaster?
Posted by Jon Toigo on October 6, 2016

Interestingly, in the enterprise data center trade shows I have attended recently, the focus was on systemic risk and systemic performance rather than on discrete products or technologies; exactly the opposite of what I’ve read about hypervisor and cloud shows, where the focus has been on faster processors, faster storage (NVMe, 3D NAND) and faster networks (100 GbE).  This may be a reflection of the two communities of practitioners that exist in contemporary IT:  the AppDev folks and the Ops folks.

[Figure: Data management]

Learn More


Is NVMe Really Revolutionary?
Posted by Jon Toigo on August 19, 2016

To hear advocates tell it, NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – is the most revolutionary thing that has ever happened in business computing. While the technology does provide a more efficient means to access flash memory, without passing I/O through the buffers, queues, and locks associated with a SAS/SATA controller, it can also be seen as the latest in a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course. It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the von Neumann machine. Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle. That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on the disk electronics themselves – to mask or spoof the speed mismatch.
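The arithmetic behind that masking trick is the familiar average-access-time formula. With purely illustrative latency figures (not measurements of any product), even a modest cache hit rate hides most of the mechanical delay:

```python
def effective_latency_us(hit_rate: float, cache_us: float, backend_us: float) -> float:
    """Average access time when a fraction of I/Os is absorbed by the cache."""
    return hit_rate * cache_us + (1.0 - hit_rate) * backend_us

# Illustrative numbers only: ~100 µs for a cache hit served from memory/flash,
# ~5,000 µs (5 ms) for a miss that goes to a mechanical disk.
for hit_rate in (0.0, 0.80, 0.95, 0.99):
    print(f"hit rate {hit_rate:.0%}: {effective_latency_us(hit_rate, 100, 5000):,.0f} µs")
# 0% -> 5,000 µs   80% -> 1,080 µs   95% -> 345 µs   99% -> 149 µs
```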

[Figure: Latency comparison]

Learn More


Manage It Already
Posted by Jon Toigo on April 27, 2016

As I review the marketing pitches of many software-defined storage products today, I am concerned by the lack of attention, in any of the software stack descriptions, to capabilities for managing the underlying hardware infrastructure. This strikes me as a huge oversight.

The truth is that delivering storage services via software — orchestrating and administering the delivery of capacity, data encryption, data protection and other services to the data that are hosted on a software-defined storage volume – is only half of the challenge of storage administration.  The other part is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.
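As a rough sketch of that other half, here is what a periodic health sweep of the physical devices might look like; the device interface is hypothetical, standing in for whatever SMART, enclosure, or fabric telemetry a real stack would actually query:

```python
import time

def sweep(devices, alert) -> None:
    """Poll each backing device once and alert on anything degraded.
    `devices` is any iterable of objects exposing .name and .health() -> str;
    that interface is an assumption for this sketch, not a real library API."""
    for dev in devices:
        status = dev.health()
        if status != "ok":
            alert(f"{dev.name}: {status}")

def monitor(devices, alert, interval_s: int = 300) -> None:
    """Run the sweep on a fixed interval -- the unglamorous half of storage
    administration that service-delivery stacks tend to leave out."""
    while True:
        sweep(devices, alert)
        time.sleep(interval_s)
```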


Learn More


Let’s Get Real About Data Protection and Disaster Recovery
Posted by Jon Toigo on April 7, 2016

Personally, I am getting rather tired of the dismissive tone adopted by virtualization and cloud vendors when you raise the issue of disaster recovery. We previously discussed the limited scope of virtual systems clustering and failover: active-passive and active-active server clusters with data mirroring are generally inadequate for recovery from interruption events with a footprint larger than a given equipment rack or subnetwork. Extending mirroring and cluster failover over distances greater than 80 kilometers is a dicey strategy, especially given the impact of latency and jitter on data transport over WAN links, which can create data deltas large enough to prevent successful application or database recovery altogether.
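One reason distance matters so much: light in optical fiber propagates at roughly 200,000 km/s, so geography alone puts a floor under synchronous mirroring latency. A back-of-the-envelope calculation (illustrative constants only, ignoring jitter and equipment delays) shows how quickly that floor rises with distance:

```python
# Roughly 200,000 km/s in fiber works out to about 5 µs per km one way, or
# ~10 µs per km round trip.  A synchronous mirror cannot acknowledge a write
# until the remote copy confirms it, so every write pays at least the RTT.
US_PER_KM_RTT = 10.0

def min_write_penalty_ms(distance_km: float) -> float:
    """Lower bound on the latency added to each synchronously mirrored write."""
    return distance_km * US_PER_KM_RTT / 1000.0

for km in (10, 80, 400, 1000):
    print(f"{km:>5} km: >= {min_write_penalty_ms(km):.1f} ms per write")
# 10 km -> 0.1 ms   80 km -> 0.8 ms   400 km -> 4.0 ms   1000 km -> 10.0 ms
```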

[Figure: HA or DR]

Learn More
