Data Management Moves to the Fore. Part 3: Data Management Requires Storage Resource and Services Management Too
Posted by Jon Toigo on April 7, 2017

Previously, we discussed how data might be classified and segregated so that policies could be developed to place data on infrastructure in a deliberative manner – that is,  in a way that optimizes data access, storage resources and services, and storage costs over the useful life of the data itself.  From the standpoint of cognitive data management, data management policies constitute the instructions or programs that the cognitive engine processes to place and move data on and within infrastructure over time.
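
To make that concrete, here is a minimal, hypothetical sketch (mine, not drawn from any particular product) of what such a policy "program" might look like to a placement engine: a table that maps a data class and its age to a storage tier.

```python
# Illustrative sketch only: hypothetical policy rules mapping data class and
# age to a storage tier, the sort of "instructions" a cognitive data
# management engine would evaluate when placing or moving data.
from datetime import datetime, timedelta

# (max_age, target_tier) pairs, evaluated in order; None means "any age".
PLACEMENT_POLICIES = {
    "transactional": [(timedelta(days=30), "flash"),
                      (timedelta(days=365), "capacity-disk"),
                      (None, "tape-archive")],
    "reference":     [(timedelta(days=90), "capacity-disk"),
                      (None, "object-archive")],
}

def place(data_class, created):
    """Return the tier the policy assigns to this data at its current age."""
    age = datetime.now() - created
    for max_age, tier in PLACEMENT_POLICIES.get(data_class, [(None, "capacity-disk")]):
        if max_age is None or age <= max_age:
            return tier

# A 200-day-old transactional record would be migrated off flash to capacity disk.
print(place("transactional", datetime.now() - timedelta(days=200)))
```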

Cognitive Data Management Facility

Learn More

Data Management Moves to the Fore. Part 2: Data Management Has Many Moving Parts
Posted by Jon Toigo on April 4, 2017

In the previous blog, we established that there is a growing need to focus on Capacity Utilization Efficiency in order to “bend the cost curve” in storage.  Just balancing data placement across repositories (Capacity Allocation Efficiency) is insufficient to cope with the impact of data growth and generally poor management.  Only by placing data on infrastructure in a deliberative manner that optimizes data access, storage services, and costs can IT pros possibly cope with the coming data deluge anticipated by industry analysts.

The problem with data management is that it hasn’t been advocated or encouraged by vendors in the storage industry.  Mismanaged data, simply put, drives the need for more capacity – and sells more kit.

COMPONENTS OF A COGNITIVE DATA MANAGEMENT SOLUTION

Learn More

Data Management Moves to the Fore. Part 1: Sorting Out the Storage Junk Drawer
Posted by Jon Toigo on March 28, 2017

Most presentations one hears at industry trade shows and conferences have to do, fundamentally, with Capacity Allocation Efficiency (CAE).  CAE seeks to answer a straightforward question:  Given a storage capacity of x petabytes or y exabytes, how will we divvy up space to workload data in a way that reduces the likelihood of a catastrophic “disk full” error?

Essentially, from a CAE perspective, efficiency involves balancing the volume of bits across physical storage repositories in a way that does not leave one container nearly full while another has mostly unused space.  The reason is simple.  As the volume of data grows and the capacity of media (whether disk or flash) increases, a lot of data – with many users – can find its way into a single repository.  When that happens, access to the data can be impaired (a lot of access requests across a few bus connections can introduce latency).  This, in turn, shows up in slower application performance, whether the workload is a database or a virtual machine.
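
A minimal sketch of the idea, with invented repository names and capacities, might look like the following: new data simply lands on the least-utilized repository that can hold it.

```python
# Illustrative sketch only: a naive Capacity Allocation Efficiency rule that
# steers new data toward the least-full repository so no single container
# fills up while others sit mostly empty.  Names and sizes are made up.
repositories = {
    "array-a": {"capacity_tb": 500, "used_tb": 420},
    "array-b": {"capacity_tb": 500, "used_tb": 180},
    "array-c": {"capacity_tb": 250, "used_tb": 100},
}

def least_utilized(repos, needed_tb):
    """Pick the repository with the lowest fill ratio that still has room."""
    candidates = {name: r for name, r in repos.items()
                  if r["capacity_tb"] - r["used_tb"] >= needed_tb}
    return min(candidates,
               key=lambda n: candidates[n]["used_tb"] / candidates[n]["capacity_tb"])

print(least_utilized(repositories, needed_tb=25))  # "array-b", at 36% full
```

As the series goes on to argue, though, balancing capacity this way says nothing about whether the data merits the space it occupies.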

Survey of 2,000 company disk storage environments

Learn More

Data Management Moves to the Fore. Introduction
Posted by Jon Toigo on March 23, 2017

Data Management Moves to the Fore

To the extent that the trade press covers, meaningfully, the issues around digital information processing and technology, it tends to focus rather narrowly on the latter:  infrastructure.  The latest hardware innovation — the fastest processor, the slickest server, the most robust hyper-converged infrastructure appliance — tends to be the shiny new thing, hogging the coverage.

Occasionally, software gets a shot at the headlines:  hypervisors, containers, object storage systems, even APIs get their 10 minutes of fame from time to time.  But, even in these days of virtual servers and software-defined networks and storage, software is less entertaining than hardware and tends to get less coverage than tin and silicon.

Learn More

Are We Trending Toward Disaster?
Posted by Jon Toigo on October 6, 2016

Interestingly, in the enterprise data center trade shows I have attended recently, the focus was on systemic risk and systemic performance rather than on discrete products or technologies; exactly the opposite of what I’ve read about hypervisor and cloud shows, where the focus has been on faster processors, faster storage (NVMe, 3D NAND) and faster networks (100 GbE).  This may be a reflection of the two communities of practitioners that exist in contemporary IT:  the AppDev folks and the Ops folks.

Data management

Learn More

Is NVMe Really Revolutionary?
Posted by Jon Toigo on August 19, 2016

To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – you would think it is the most revolutionary thing that has ever happened in business computing.  While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest in a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course.  It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the von Neumann machine.  Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle.  That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on the disk electronics themselves – to help mask or spoof the speed mismatch.
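
The caching trick those engineers relied on is easy to picture.  Here is a toy, illustrative sketch (the timings are placeholders, not measurements of any real device) of a read cache sitting in front of a slow "disk":

```python
# Illustrative sketch only: memory used as a read cache ahead of a slow
# mechanical "disk" to mask the speed mismatch described above.
import time

DISK = {"block-7": b"payload"}   # stands in for a slow mechanical device
CACHE = {}                       # stands in for DRAM in front of it

def read_block(block_id):
    if block_id in CACHE:        # cache hit: served at memory speed
        return CACHE[block_id]
    time.sleep(0.005)            # pretend 5 ms of seek/rotational latency
    data = DISK[block_id]
    CACHE[block_id] = data       # keep a copy so later reads skip the "disk"
    return data

read_block("block-7")            # slow: goes all the way to the "disk"
read_block("block-7")            # fast: satisfied from cache
```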

latency comparison

Learn More

Manage It Already
Posted by Jon Toigo on April 27, 2016

As I review the marketing pitches of many software-defined storage products today, I am concerned that none of the software stack descriptions pays any attention whatsoever to managing the underlying hardware infrastructure.  This strikes me as a huge oversight.

The truth is that delivering storage services via software — orchestrating and administering the delivery of capacity, data encryption, data protection and other services to the data that are hosted on a software-defined storage volume – is only half of the challenge of storage administration.  The other part is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.
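
For what it is worth, even a rudimentary health check belongs in that other half.  A minimal sketch, assuming Linux hosts with smartmontools installed and with device paths and the alerting hand-off left as placeholders, might poll drive health like this:

```python
# Illustrative sketch only: polling drive health with smartctl (smartmontools).
# Device paths and the alerting hand-off are assumptions for the example; the
# "PASSED" string is what smartctl reports for healthy ATA drives.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # assumed device paths

def drive_healthy(device):
    """Return True if smartctl's overall health assessment is PASSED."""
    result = subprocess.run(["smartctl", "-H", device],
                            capture_output=True, text=True)
    return "PASSED" in result.stdout

for dev in DEVICES:
    if not drive_healthy(dev):
        print(f"ALERT: {dev} failed its SMART health check")  # hand off to ops tooling
```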

Learn More

Let’s Get Real About Data Protection and Disaster Recovery
Posted by Jon Toigo on April 7, 2016

Personally, I am getting rather tired of the dismissive tone adopted by virtualization and cloud vendors when you raise the issue of disaster recovery.  We previously discussed the limited scope of virtual systems clustering and failover:  active-passive and active-active server clusters with data mirroring are generally inadequate for recovery from interruption events that have a footprint larger than a given equipment rack or subnetwork.  Extending mirroring and cluster failover over distances greater than 80 kilometers is a dicey strategy, especially given the impact of latency and jitter on data transport over WAN links, which can create data deltas that prevent successful application or database recovery altogether.
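
The distance problem is not a vendor quibble; it is physics.  A back-of-the-envelope calculation (ignoring switch, protocol and queuing overhead, which only make matters worse) shows the latency floor that synchronous mirroring imposes on every write:

```python
# Back-of-the-envelope only: light travels through optical fiber at roughly
# 200 km per millisecond, so a synchronously mirrored write cannot be
# acknowledged faster than the round trip to the remote copy.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km):
    """Minimum added delay per mirrored write, before any equipment overhead."""
    return (2 * distance_km) / SPEED_IN_FIBER_KM_PER_MS

for km in (10, 80, 300):
    print(f"{km:>4} km: at least {min_round_trip_ms(km):.2f} ms added to every write")
# At 80 km that is already ~0.8 ms per write; stretch the mirror to hundreds of
# kilometers and applications stall, which is why asynchronous techniques (and
# the data deltas they introduce) take over at distance.
```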

HA or DR

Learn More

World Backup Day Is Coming
Posted by Jon Toigo on March 16, 2016

At the end of March, an event little known outside of a small community of vendors will happen:  World Backup Day.  Expect a flurry of blogs and tweets and posts and all of the other stuff that goes along with such marketing events.  Then, expect the discussion to go silent for another year…unless a newsworthy data disaster occurs.

Truth be told, backup has never been front of mind for IT planners.  Most planners don’t even consider how they will back up the data they will be storing when they go out to purchase storage rigs.  And most have no clue regarding which data needs to be protected.  Backup is an afterthought.

Backup steps

Learn More

Hyper-Convergence Takes Hold
Posted by Jon Toigo on February 18, 2016

Hyper-converged infrastructure, when we started to hear about it last year, was simply an “appliantization” of the architecture and technology of software-defined storage (SDS) running in concert with server virtualization technology. Appliantization means that the gear peddler was doing the heavy lifting of pre-integrating server and storage hardware with hypervisor and SDS software so that the resulting kit would be pretty much plug-and-play.

HCI

Learn More
