Data Management Moves to the Fore
Posted by Jon Toigo on March 23, 2017

To the extent that the trade press covers, meaningfully, the issues around digital information processing and technology, it tends to focus rather narrowly on the latter:  infrastructure.  The latest hardware innovation — the fastest processor, the slickest server, the most robust hyper-converged infrastructure appliance — tends to be the shiny new thing, hogging the coverage.

Occasionally, software gets a shot at the headlines:  hypervisors, containers, object storage systems, even APIs get their 15 minutes of fame from time to time.  But, even in these days of virtual servers and software-defined networks and storage, software is less entertaining than hardware and tends to get less coverage than tin and silicon.


Are We Trending Toward Disaster?
Posted by Jon Toigo on October 6, 2016

Interestingly, in the enterprise data center trade shows I have attended recently, the focus was on systemic risk and systemic performance rather than on discrete products or technologies – exactly the opposite of what I’ve read about hypervisor and cloud shows, where the focus has been on faster processors, faster storage (NVMe, 3D NAND) and faster networks (100 GbE).  This may be a reflection of the two communities of practitioners that exist in contemporary IT:  the AppDev folks and the Ops folks.

[Image: Data management]


Is NVMe Really Revolutionary?
Posted by Jon Toigo on August 19, 2016

To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – you would think it is the most revolutionary thing that has ever happened in business computing.  While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest in a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.

I am not against faster I/O processing, of course.  It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the von Neumann machine.  Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle.  That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on disk electronics directly – to help mask the speed mismatch.
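The speed masking that caching provides can be sketched with the textbook blended-access-time formula; the latency figures below are illustrative assumptions, not measurements from any particular device.

```python
# Effective access time when fast memory fronts a slow mechanical disk:
#   t_avg = hit_ratio * t_cache + (1 - hit_ratio) * t_disk
# Latency figures are illustrative assumptions (microseconds).

T_CACHE_US = 0.1     # assumed DRAM cache hit
T_DISK_US = 5000.0   # assumed mechanical seek + rotation

def avg_access_us(hit_ratio: float) -> float:
    """Blended access time for a cache-fronted disk."""
    return hit_ratio * T_CACHE_US + (1 - hit_ratio) * T_DISK_US

for hr in (0.0, 0.90, 0.99):
    print(f"hit ratio {hr:.0%}: {avg_access_us(hr):8.1f} us average")
```

Even a 99 percent hit ratio leaves the average dominated by the rare disk access, which is why caching masks the mismatch rather than eliminating it.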

[Image: latency comparison]


Manage It Already
Posted by Jon Toigo on April 27, 2016

As I review the marketing pitches of many software-defined storage products today, I am concerned that none of the software stack descriptions mentions any capability for managing the underlying hardware infrastructure.  This strikes me as a huge oversight.

The truth is that delivering storage services via software – orchestrating and administering the delivery of capacity, data encryption, data protection and other services to the data hosted on a software-defined storage volume – is only half of the challenge of storage administration.  The other half is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.
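The two halves of the job can be sketched as data structures; every name here is hypothetical, chosen for illustration, and not taken from any vendor's stack.

```python
from dataclasses import dataclass

@dataclass
class StorageServices:
    """The software half: services delivered to a virtual volume.
    All field names are hypothetical, not a vendor API."""
    capacity_gb: int
    encrypted: bool = False
    replica_copies: int = 1

@dataclass
class DeviceHealth:
    """The often-neglected half: state of the physical underlayment."""
    device_id: str
    temperature_c: float
    link_up: bool

def underlayment_healthy(devices: list[DeviceHealth]) -> bool:
    # A volume is only as good as the gear beneath it: every backing
    # device must be reachable and within an assumed thermal limit.
    return all(d.link_up and d.temperature_c < 60.0 for d in devices)
```

An SDS stack that tracks only the first structure, with no equivalent of the second, is managing half the problem.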



Let’s Get Real About Data Protection and Disaster Recovery
Posted by Jon Toigo on April 7, 2016

Personally, I am getting rather tired of the dismissive tone adopted by virtualization and cloud vendors when you raise the issue of disaster recovery.  We previously discussed the limited scope of virtual systems clustering and failover:  active-passive and active-active server clusters with data mirroring are generally inadequate for recovery from interruption events with a footprint larger than a given equipment rack or subnetwork.  Extending mirroring and cluster failover over distances greater than 80 kilometers is a dicey strategy, especially given the impact of latency and jitter on data transport over WAN links, which can create data deltas large enough to prevent successful application or database recovery altogether.
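The distance penalty is easy to put numbers on.  Light in fiber travels at roughly 200,000 km/s – about 5 microseconds per kilometer one way – and a synchronous mirror must wait a full round trip before acknowledging each write.  The sketch below uses that approximation and deliberately ignores all switch, array and protocol overhead, which only make matters worse.

```python
# Propagation delay alone bounds synchronous mirroring over distance.
# Assumes ~5 us/km one-way in optical fiber (an approximation).

US_PER_KM = 5.0  # approximate one-way propagation in fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2 * distance_km * US_PER_KM / 1000.0

def max_serial_sync_writes(distance_km: float) -> float:
    """Upper bound on one serial stream of synchronous writes/second."""
    return 1000.0 / round_trip_ms(distance_km)

for km in (10, 80, 500):
    print(f"{km:4d} km: RTT {round_trip_ms(km):5.1f} ms, "
          f"<= {max_serial_sync_writes(km):7.0f} serial writes/s")
```

At 80 km the round trip alone is 0.8 ms, capping a serial synchronous write stream at about 1,250 writes per second; past that point mirrors fall behind and the data deltas described above begin to accumulate.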

[Image: HA or DR]


World Backup Day Is Coming
Posted by Jon Toigo on March 16, 2016

At the end of March, an event little known outside of a small community of vendors will happen:  World Backup Day.  Expect a flurry of blogs and tweets and posts and all of the other stuff that goes along with such marketing events.  Then, expect the discussion to go silent for another year…unless a newsworthy data disaster occurs.

Truth be told, backup has never been front of mind for IT planners.  Most planners don’t even consider how they will back up the data they will be storing when they go out to purchase storage rigs.  And most have no clue regarding which data needs to be protected.  Backup is an afterthought.

[Image: Backup steps]


Hyper-Convergence Takes Hold
Posted by Jon Toigo on February 18, 2016

Hyper-converged infrastructure, when we started to hear about it last year, was simply an “appliantization” of the architecture and technology of software-defined storage (SDS) running in concert with server virtualization technology. Appliantization means that the gear peddler was doing the heavy lifting of pre-integrating server and storage hardware with hypervisor and SDS software so that the resulting kit would be pretty much plug-and-play.

[Image: HCI]


Hyper-Converged Needs to Get Beyond the Hype
Posted by Jon Toigo on January 25, 2016

It used to be that, when you bought a server with a NIC and some internal or direct-attached storage, it was simply called a server. If it had some tiered storage – different media with different performance characteristics and different capacities – and some intelligence for moving data across “tiers,” we called it an “enterprise server.” If the server and storage kit were clustered, we called it a high availability enterprise server. Over the past year, though, we have gone through a collective terminology refresh.

Today, you cobble together a server with some software-defined storage software, a hypervisor, and some internal or external flash and/or disk and the result is called “hyper-converged infrastructure.” Given the lack of consistency in what people mean when they say “hyper-converged,” we may be talking about any collection of gear and software that a vendor has “pre-integrated” before marking up the kit and selling it for a huge profit. Having recently requested information from so-called hyper-converged infrastructure vendors, I was amazed at some of the inquiries I received from would-be participants.

[Image: HCI types]
