Integrating StarWind Virtual Tape Library (VTL) with Microsoft System Center Data Protection Manager
Posted by Dmytro Khomenko on
April 28, 2017
This article aims to eliminate any possible confusion about the process of configuring StarWind Virtual Tape Library together with Microsoft System Center Data Protection Manager.
Integrating SCDPM provides the benefit of consolidating the view of alerts across all your DPM 2016 servers. Alerts are grouped by disk or tape, data source, protection group, and replica volume, which simplifies troubleshooting. The console completes this grouping by separating issues that affect only one data source from problems that impact multiple data sources. Alerts are also categorized as either backup failures or infrastructure problems.
Storage HA on the Cheap: Fixing Synology DiskStation flaky Performance with StarWind Free. Part 3 (Failover Duration)
Posted by Vladislav Karaiev on
February 17, 2017
We are continuing our series of articles dedicated to Synology’s DS916+ mid-range NAS units. Remember, we don’t dispute that Synology delivers a great set of NAS features. Instead, we are running a number of tests on a pair of DS916+ units to determine whether they can be used as general-purpose primary production storage. In Part 1, we tested the performance of the DS916+ in different configurations and showed how to significantly increase the performance of a “dual” DS916+ setup by replacing the native Synology DSM HA Cluster with StarWind Virtual SAN Free.
Storage HA on the Cheap: Fixing Synology DiskStation flaky Performance with StarWind Free. Part 2 (Log-Structured File System)
Posted by Alex Bykovskyi on
February 13, 2017
In this article, we are going to continue testing the Synology DS916+ with StarWind Virtual SAN. Our main goal today is to improve the performance of the Synology boxes specifically on random patterns. Randoms were chosen for a reason: SQL and OLTP workloads generate heavily randomized I/O, which puts huge stress on spindle arrays in particular. The patterns we have chosen for today’s benchmark are common for such environments. There are different approaches that can handle these workload types, such as caching and tiering. Our approach is to build the environment with StarWind Log-Structured File System (LSFS), which was designed exactly for these environments to improve performance.
We will compare the results we receive to the ones from Part 1 of our research.
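The full article documents the exact benchmark setup; as a rough sketch only, a random small-block pattern of the kind described could be reproduced with Microsoft’s DiskSpd tool (the target path, file size, and parameters below are illustrative, not the article’s actual settings):

```powershell
# Hypothetical 4K random test approximating a SQL/OLTP-style pattern:
#   -b4K  4 KiB blocks        -r    random access
#   -w30  30% writes          -o32  32 outstanding I/Os per thread
#   -t4   4 worker threads    -d60  60-second run
#   -Sh   disable software and hardware write caching
#   -L    capture latency statistics
.\diskspd.exe -c10G -b4K -r -w30 -o32 -t4 -d60 -Sh -L X:\lsfs-test.dat
```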
Storage HA on the Cheap: Fixing Synology DiskStation flaky Performance with StarWind Free. Part 1 (Architecture)
Posted by Vladislav Karaiev on
January 4, 2017
DiskStation DS916+ is a further improvement of the DS415+ model. Storage capacity in the DS916+ can be scaled using DX513 expansion units, for a total of nine 3.5″ drive bays. Given the relatively small form factor and impressive capacity potential, such a configuration may become a great solution for small businesses and enthusiasts.
The latest updates in vSphere 6.5 and VSAN 6.5
Posted by Askar Kopbayev on
October 18, 2016
This day has come: vSphere 6.5 has just been announced. Like many of you, I had been waiting for the new vSphere to be presented at the VMworld event in the USA, but I guess VMware preferred to save vSphere 6.5 as a treat for those who were in doubt whether to attend VMworld Europe, since all VMworld US sessions were made available online to everyone; or perhaps VMware simply hadn’t decided which features should be included in the GA release.
In this post, I will try to cover all the new features of vSphere 6.5 and VSAN 6.5, but if I missed something, feel free to let me know by leaving a comment.
To be honest, there is so much to talk about, and some of the new features require separate posts to be explained properly. Therefore, please don’t expect a detailed review of every single feature. This is more of a ‘What’s new in vSphere 6.5 and VSAN 6.5’ overview; in future posts I will cover some of the most interesting improvements and enhancements in detail.
Don’t Fear but Respect Redirected IO with Shared VHDX
Posted by Didier Van Hoye on
August 25, 2016
When we got Shared VHDX in Windows Server 2012 R2, we were quite pleased, as it opened up the road to guest clustering (failover clustering in virtual machines) without needing to break through the virtualization layer with iSCSI or virtual Fibre Channel (vFC).
First of all, you need to be aware of the limits of using a shared VHDX in Windows Server 2012 R2.
- You cannot perform storage live migration
- You cannot resize the VHDX online
- You cannot do host-based backups (i.e., you need to do in-guest backups)
- No support for checkpoints
- No support for Hyper-V Replica
If you cannot live with these limits, that’s a good indicator this is not for you. But if you can, you should also take care of the potential redirected IO impact that can and will occur. This doesn’t mean it won’t work for you, but you need to know about it, design and build for it, and test it realistically against your real-life workloads.
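For reference, presenting a shared VHDX to guest cluster nodes in Windows Server 2012 R2 comes down to one extra switch on the disk attach. A minimal sketch, assuming the disk lives on a Cluster Shared Volume and using placeholder VM names:

```powershell
# Create a fixed-size VHDX on a Cluster Shared Volume
# (a shared VHDX must reside on CSV or SMB 3.0 Scale-Out File Server storage)
New-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterDisk.vhdx" -Fixed -SizeBytes 100GB

# Attach the same VHDX to each guest cluster node;
# -SupportPersistentReservations marks it as shared between the VMs
"SQLNode1", "SQLNode2" | ForEach-Object {
    Add-VMHardDiskDrive -VMName $_ `
        -Path "C:\ClusterStorage\Volume1\GuestClusterDisk.vhdx" `
        -SupportPersistentReservations
}
```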
How to Configure Storage Replication using Windows Server 2016? – Part 2
Posted by Charbel Nemnom on
February 3, 2016
Warning: This article is written with information related to Windows Server 2016 Technical Preview 4.
In part one of this multi-part blog on How to Configure Storage Replication in Windows Server 2016, we introduced Storage Replica, a new feature in Windows Server 2016, and walked step by step through the implementation of Windows Volume Replication (server-to-server). In this follow-up post, we are going to cover the implementation of volume replication with a stretch cluster. This type of cluster uses asymmetric storage: two sites, each with its own set of shared storage, with volume replication ensuring that data is available to all nodes in the cluster.
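For context, Windows Server 2016 ships a cmdlet for validating the replication topology before you build the partnership; a minimal sketch, with placeholder server and volume names (keeping in mind that Technical Preview 4 cmdlets may differ from the final release):

```powershell
# Validate candidate data/log volumes and estimate initial sync time
# before configuring replication (all names are illustrative)
Test-SRTopology -SourceComputerName "SR-SRV01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "SR-SRV02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:" `
    -DurationInMinutes 30 -ResultPath "C:\Temp"
```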
iSCSI: LACP vs. MPIO
Posted by Anton Kolomyeytsev on
March 31, 2015
Here is a comparison of two technologies with a similar task but different methods of accomplishing it: Link Aggregation Control Protocol (LACP) and Multipath I/O (MPIO). Both are aimed at providing higher throughput when one connection can’t handle the load. To achieve that, LACP bundles several physical ports into a single logical channel, while MPIO utilizes more than one physical path even if the working application does not support multiple connections. Both technologies seem equally effective at first glance, but further study shows that one of them is better at achieving this goal. The post is practical, so expect detailed research with screenshots and a complete analysis of both technologies in a test case.
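To make the contrast concrete, here is roughly what each approach looks like on the Windows side (adapter names and the load-balancing policy are placeholders; the article itself walks through the full test configuration):

```powershell
# LACP: bundle two NICs into one logical adapter
# (the physical switch ports must be configured for LACP as well)
New-NetLbfoTeam -Name "iSCSI-Team" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode LACP -LoadBalancingAlgorithm Dynamic

# MPIO: install the feature, let the Microsoft DSM claim iSCSI devices,
# and spread I/O across the physical paths with a round-robin policy
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```

One relevant design difference: MPIO operates at the storage-session level, so a single iSCSI initiator can drive all paths at once, whereas LACP hashes each TCP session onto one physical link, leaving a single iSCSI connection pinned to one port.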
Storage Replica: “Shared Nothing” Hyper-V Guest VM Cluster
Posted by Anton Kolomyeytsev on
October 30, 2014
This post is about Microsoft Storage Replica, a new solution introduced in Windows Server 2016. Basically, it enables replication between two servers or clusters, or inside a cluster, and it is also capable of copying data between volumes on the same server. Storage Replica is often utilized for disaster recovery, allowing the user to replicate data to a remote site and thus recover from a complete physical failure of the main location. This post is dedicated to building a “shared nothing” cluster: it is an experimental part of a series and features a “Shared Nothing” Hyper-V guest VM cluster. As always, there are detailed instructions on how to create the setup, along with results for everyone to check.
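For a sense of the plumbing involved, a basic two-server replica partnership is created along these lines with the Windows Server 2016 cmdlets (the preview builds this post was tested on may differ; all names and volumes are placeholders):

```powershell
# Create a Storage Replica partnership between two standalone servers;
# D: holds the data volume, E: holds the replication log on each side
New-SRPartnership -SourceComputerName "HV-NODE1" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "HV-NODE2" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"

# Check replication progress once the initial block copy starts
(Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationStatus
```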