Posted by Boris Yurchenko on February 22, 2018
Dedupe or Not Dedupe: That is the Question!

Today I will look at data deduplication. Data deduplication is a technique that avoids storing repeated identical data blocks. During the deduplication process, unique data blocks, or byte patterns, are identified, analyzed, and written to the storage array. As the analysis continues, incoming data blocks are compared against the previously stored patterns. If a match is found, the system stores a small reference to the original data block instead of writing the block again. In small environments this rarely matters much, but in environments with dozens or hundreds of VMs the same patterns occur many times. As a result, data deduplication allows more information to be stored on the same physical storage volume than traditional data storage methods. It can be implemented in several ways, one of which is StarWind LSFS (Log-Structured File System), which offers inline deduplication of data on LSFS-powered virtual storage devices.
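
To make the idea concrete, here is a minimal sketch of block-level deduplication. It is not StarWind's LSFS implementation; it simply assumes fixed 4 KB blocks and SHA-256 content hashing to show how repeated byte patterns collapse into a single stored block plus small references.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size


def deduplicate(data: bytes):
    """Split data into fixed-size blocks and keep each unique block only once.

    Returns the unique block store and an ordered list of references
    (content hashes) from which the original stream can be rebuilt.
    """
    store = {}        # hash -> unique block contents
    references = []   # ordered references replacing repeated blocks
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # first time this byte pattern is seen
            store[digest] = block    # write the block to the "array"
        references.append(digest)    # repeats cost only a small reference
    return store, references


def restore(store, references):
    """Rebuild the original stream from the references and the block store."""
    return b"".join(store[digest] for digest in references)


if __name__ == "__main__":
    sample = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated patterns
    store, refs = deduplicate(sample)
    assert restore(store, refs) == sample
    print(f"{len(refs)} blocks referenced, {len(store)} stored uniquely")
```

In this toy run, four logical blocks are written but only two unique blocks end up on "disk"; the rest are references, which is exactly the saving inline deduplication delivers at scale.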

Learn More

Posted by Dmytro Khomenko on April 28, 2017
Integrating StarWind Virtual Tape Library (VTL) with Microsoft System Center Data Protection Manager

This article was written to eliminate any possible confusion when configuring StarWind Virtual Tape Library together with Microsoft System Center Data Protection Manager. Integrating with SCDPM consolidates the view of alerts across all your DPM 2016 servers. Alerts are grouped by disk or tape, data source, protection group, and replica volume, which simplifies troubleshooting. The grouping is complemented by a console that separates issues affecting only one data source from problems that impact multiple data sources. Alerts are also classified as either backup failures or infrastructure problems.

Learn More

Posted by Didier Van Hoye on September 12, 2016
Hyper-V backup challenges Windows Server 2016 needs to address

Introduction

Personally, I have been very successful at providing good backup designs for Hyper-V in environments from small to large, with budgets ranging from “make do” to “well-funded”. How does one achieve this? Two factors. The first is knowing the strengths and limitations of the various Hyper-V versions when you design the backup solution. Beyond the ever-better scalability, performance, and capabilities of each new version of Hyper-V, the backup improvements from 2012 to 2012 R2, for example, were a prime motivator to upgrade. The second factor is that I demand a mandate and control over the infrastructure stack. In many cases you are not that lucky and cannot change much in existing environments. Sometimes not even in new environments, when the gear and solutions have already been chosen and purchased and the design is deployed before you get involved.

Windows Server 2008 (R2) - 2012 Hyper-V Backup
Learn More