Design a ROBO infrastructure (Part 2): Design areas and technologies
Posted by Andrea Mauro on
February 24, 2017
In the previous post, we explained the business requirements and constraints that drive design and implementation decisions for mission-critical applications, and we considered how risk can affect those decisions.
Now we will map the following technology areas onto those design requirements:
- Performance and scaling
- Risk and budget management
Setting yourself up for success with virtualization
Posted by Michael Ryom on
February 16, 2017
I am going to address a few issues I have seen repeatedly in my virtualization career. It is not that you have to take extra care when virtualizing, but your virtual environment will never be better than the foundation you build it on. The reason you do not see many people fuss about this in non-virtualized environments (anymore), I believe, is that resources are in abundance today. They were ten years ago as well, but since then server hardware specifications have only climbed higher, which is the very reason we started to virtualize in the first place. Do not get me wrong: lots of people care about the performance of their virtual and physical environments. Yet some have not set themselves up for a successful virtualization project. Let me elaborate…
The Virtualization Review Editor’s Choice Awards 2016
Posted by Oksana Zybinskaya on
December 26, 2016
The Virtualization Review Editor’s Choice is a selection of the most outstanding virtualization products of 2016, based on the opinions and reviews of trusted experts in the fields of virtualization and cloud computing. This is not a “best of the best” ranking: no formal criteria were applied to make the list. It is simply a collection of individual picks by writers who deal with the industry daily, highlighting the virtualization solutions they found especially interesting and useful.
Storage Replica: Overview
Posted by Anton Kolomyeytsev on
May 11, 2016
Here is an overview dedicated to disaster recovery; more specifically, the DR capabilities of Microsoft Storage Replica, a new feature of Windows Server 2016. It takes a glance at the DR process itself and then covers a few details of Storage Replica’s operation, features, and peculiarities, including: zero data loss, block-level replication, simple deployment and management, guest and host deployment, the SMB3 protocol, strong security, high performance, consistency groups, user delegation, network constraint, thin provisioning, and more. The post is, essentially, an introduction to a series of experiments also listed in the blog, conducted to check the functionality and performance of Microsoft Storage Replica in different use cases.
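The “zero data loss” claim above comes from synchronous replication: a write is acknowledged to the application only after both the local and the partner copy have committed it, so every acknowledged write survives a primary-site loss. The following is a minimal Python sketch of that acknowledgement rule, not Storage Replica’s actual implementation; the class and method names are illustrative assumptions.

```python
# Minimal sketch (NOT the Storage Replica implementation): models why
# synchronous block replication can promise zero data loss -- a write is
# acknowledged only after both the local and the remote log hold it.

class SyncReplicatedVolume:
    """Hypothetical model of a synchronously replicated volume."""

    def __init__(self):
        self.local_log = []   # primary site's committed writes
        self.remote_log = []  # partner site's committed writes

    def write(self, block_id, data):
        # 1. Commit locally; 2. ship to the partner (over SMB3 in the real
        # feature); 3. acknowledge only once both sites hold the write.
        self.local_log.append((block_id, data))
        self.remote_log.append((block_id, data))
        return "ack"  # the caller sees success only after both commits

    def failover_state(self):
        # After losing the primary, the partner's log equals every
        # acknowledged write -- nothing acknowledged is missing.
        return self.remote_log


vol = SyncReplicatedVolume()
vol.write(0, b"boot sector")
vol.write(1, b"database page")
assert vol.failover_state() == vol.local_log  # RPO = 0 for acked writes
```

The point of the model is the ordering, not the data structures: because the acknowledgement happens last, there is never an acknowledged write that exists only on the failed site.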
Manage It Already
Posted by Jon Toigo on
April 27, 2016
As I review the marketing pitches of many software-defined storage products today, I am concerned by the lack of attention in any of the software stack descriptions to any capabilities whatsoever for managing the underlying hardware infrastructure. This strikes me as a huge oversight.
The truth is that delivering storage services via software (orchestrating and administering the delivery of capacity, data encryption, data protection, and other services to the data hosted on a software-defined storage volume) is only half of the challenge of storage administration. The other part is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.
Let’s Get Real About Data Protection and Disaster Recovery
Posted by Jon Toigo on
April 7, 2016
Personally, I am getting rather tired of the dismissive tone adopted by virtualization and cloud vendors when you raise the issue of disaster recovery. We previously discussed the limited scope of virtual systems clustering and failover: active-passive and active-active server clusters with data mirroring are generally inadequate for recovery from interruption events that have a footprint larger than a given equipment rack or subnetwork. Extending mirroring and cluster failover over distances greater than 80 kilometers is a dicey strategy, especially given the impact of latency and jitter on data transport over WAN links, which can create data deltas that prevent successful application or database recovery altogether.
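The distance limit above is ultimately physics: synchronous mirroring pays at least one network round trip per write, and light in fiber travels at roughly 200,000 km/s (about 5 microseconds per kilometer, one way). The back-of-the-envelope sketch below shows why 80 km is a common practical ceiling; the figures are illustrative estimates, not a vendor specification, and real links add switching delay and jitter on top.

```python
# Back-of-the-envelope sketch of why distance hurts synchronous mirroring.
# Assumes signal speed in fiber of ~200,000 km/s; propagation delay only.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200 km per millisecond in glass

def round_trip_ms(distance_km):
    """Propagation-only round-trip time; real links add equipment latency."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

def max_sync_writes_per_sec(distance_km):
    """Upper bound on strictly serialized synchronous writes per second,
    ignoring every source of latency except fiber propagation."""
    return 1000.0 / round_trip_ms(distance_km)

for km in (10, 80, 400):
    print(f"{km:>4} km: RTT >= {round_trip_ms(km):.2f} ms, "
          f"<= {max_sync_writes_per_sec(km):,.0f} serialized writes/s")
```

At 80 km the round trip alone costs about 0.8 ms per write; stretch the mirror to 400 km and the floor rises to 4 ms, before any jitter, queuing, or retransmission. That is the latency tax the vendors tend to wave away.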