Dedupe or Not Dedupe: That is the Question!
Posted by Boris Yurchenko on February 22, 2018

When planning or improving an IT infrastructure, one of the most difficult challenges is choosing an approach that will require as few changes as possible when the environment scales up later. Keeping this in mind is really important, as at some point almost all environments reach the state where the need for growth becomes evident. This is part of why hyperconvergence is so popular these days: it makes managing large setups with fewer resources considerably easier.

Today I will take a closer look at data deduplication. Data deduplication is a technique that avoids storing repeated identical data blocks. During the deduplication process, unique data blocks, or byte patterns, are identified, analyzed, and written to the storage array. As this analysis continues, further data blocks are compared to the stored patterns, and whenever a match is found, the system stores a small reference to the original block instead of a new copy. In small environments this rarely matters much, but in those with dozens or hundreds of VMs the same patterns occur many times over. Thus, data deduplication allows storing more information on the same physical storage volume than traditional data storage methods do. It can be implemented in several ways, one of which is StarWind LSFS (Log-Structured File System), which offers inline deduplication of data on LSFS-powered virtual storage devices.
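
To make the mechanism concrete, here is a minimal sketch of content-hash-based block deduplication in Python. It is a toy model of the general technique, not StarWind's implementation; the block size and the in-memory dictionaries are assumptions for illustration only.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size; real systems vary

class DedupStore:
    """Toy inline-deduplicating block store keyed by content hash."""

    def __init__(self):
        self.blocks = {}   # hash -> unique block data (stored once)
        self.layout = []   # ordered list of hashes, i.e. tiny references

    def write(self, data: bytes) -> None:
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Store the block only if this pattern was never seen before;
            # otherwise just record a reference to the existing copy.
            self.blocks.setdefault(digest, block)
            self.layout.append(digest)

    def read(self) -> bytes:
        return b"".join(self.blocks[h] for h in self.layout)

store = DedupStore()
store.write(b"A" * 8192 + b"B" * 4096 + b"A" * 4096)  # repeated pattern
print(len(store.layout), "logical blocks,", len(store.blocks), "unique")
# -> 4 logical blocks, 2 unique
```

The saving is exactly what the paragraph above describes: four logical blocks cost only two physical ones, because the repeated pattern is kept once and referenced twice.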

(more…)

Using a Veeam off-host backup proxy server for backing up Windows Server 2016 Hyper-V Hosts
Posted by Didier Van Hoye on December 5, 2017

Many years ago, I wrote a white paper on how to configure a Veeam off-host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster that uses a hardware VSS provider with Veeam Backup & Replication 7.0. It has aged well, and you can still use it as a guide to set everything up. But in this article, I revisit the use of a hardware VSS provider, focusing specifically on some changes in Windows Server 2016 and its use by Veeam Backup & Replication v9.5 or later. The information here is valid for any good hardware VSS provider, like the one StarWind Virtual SAN provides (see Do I need StarWind Hardware VSS provider?).

[Image: VSS list of events]

(more…)

Storage HA on the Cheap: Fixing Synology DiskStation Flaky Performance with StarWind Free. Part 2 (Log-Structured File System)
Posted by Alex Bykovskyi on February 13, 2017

Introduction

In this article, we are going to continue testing the Synology DS916+ with StarWind Virtual SAN. Our main goal today is to improve the performance of Synology boxes specifically under random access patterns. Randoms were chosen for a reason: SQL and OLTP workloads generate heavily randomized I/O, which puts huge stress on spindle arrays in particular. The patterns chosen for today's benchmark are common in such environments. There are different approaches that can handle these workload types, such as caching and tiering. Our approach is to build the environment with StarWind Log-Structured File System, since LSFS was created precisely to improve performance for this type of environment.
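
As a rough illustration of what "random 4K writes" means in practice, here is a minimal single-threaded Python sketch. Real benchmarks use tools like fio or diskspd with unbuffered I/O and configurable queue depth, so treat the file path, working-set size, and write count below as assumptions for illustration only.

```python
import os, random, time

PATH = "testfile.bin"          # hypothetical target file on the array under test
FILE_SIZE = 256 * 1024 * 1024  # 256 MiB working set (assumption)
BLOCK = 4096                   # 4K blocks, typical for OLTP-style patterns
WRITES = 10_000

# Pre-allocate the file so every random write lands inside an existing extent.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_WRONLY | getattr(os, "O_BINARY", 0))
start = time.perf_counter()
for _ in range(WRITES):
    # Seek to a random block-aligned offset, then write one 4K block.
    os.lseek(fd, random.randrange(FILE_SIZE // BLOCK) * BLOCK, os.SEEK_SET)
    os.write(fd, buf)
os.fsync(fd)
os.close(fd)

elapsed = time.perf_counter() - start
print(f"~{WRITES / elapsed:.0f} random 4K write IOPS (buffered, single-threaded)")
```

On spindle arrays, each of those scattered writes costs a head seek, which is exactly the stress this benchmark series targets.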

We will compare the results we receive to the ones from Part 1 of our research.

[Image: Synology DS916+ with StarWind]

(more…)

ReFS: Performance
Posted by Anton Kolomyeytsev on June 23, 2016

ReFS (Resilient File System – https://msdn.microsoft.com/en-us/library/windows/desktop/hh848060%28v=vs.85%29.aspx) is a Microsoft file system that ensures data integrity through resilience to corruption (irrespective of software or hardware failures), increases data availability, and scales to large data sets across various workloads. Its data protection feature is exposed as the FileIntegrity option, which controls the file scanning and repair processes.

(more…)

ReFS: Log-Structured
Posted by Anton Kolomyeytsev on April 12, 2016

Here is a part of a series about the Microsoft Resilient File System, first introduced in Windows Server 2012. It describes an experiment conducted by StarWind engineers to see ReFS in action. This part is mostly about the FileIntegrity feature: its theoretical application and its practical performance under a real virtualization workload. The feature is responsible for data protection in ReFS and is basically the reason for "resilient" in its name. Its goal is to avoid the common errors that typically lead to data loss. In theory, ReFS can detect and correct data corruption without disturbing the user or disrupting the production process.
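
Conceptually, this kind of self-healing rests on per-block checksums plus a redundant copy to repair from. The Python toy below illustrates the general idea only; it is not how ReFS is implemented internally, and every name in it is hypothetical.

```python
import zlib

def checksum(block: bytes) -> int:
    return zlib.crc32(block)

class MirroredVolume:
    """Toy integrity-checked store: two copies of every block, each with a CRC."""

    def __init__(self, blocks):
        self.primary = [(b, checksum(b)) for b in blocks]
        self.mirror = [(b, checksum(b)) for b in blocks]

    def read(self, i: int) -> bytes:
        data, stored = self.primary[i]
        if checksum(data) != stored:            # silent corruption detected
            data, stored = self.mirror[i]       # fall back to the healthy copy
            assert checksum(data) == stored
            self.primary[i] = (data, stored)    # repair the bad copy in place
        return data

vol = MirroredVolume([b"hello", b"world"])
vol.primary[1] = (b"w0rld", vol.primary[1][1])  # simulate a bit flip on disk
print(vol.read(1))  # -> b'world', transparently repaired from the mirror
```

The caller never sees the corruption: the read path detects the checksum mismatch, serves the good copy, and heals the bad one, which is the behavior the paragraph above attributes to ReFS in theory.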

[Image: Device Manager]

(more…)

Log-Structured File Systems: Overview
Posted by Anton Kolomyeytsev on October 26, 2015

Log-Structured File System is effective, but not for everyone. As the "benefits vs. drawbacks" list shows, log-structuring is oriented toward virtualization workloads with lots of random writes, where it performs marvelously. It won't work well as a common file system for everyday tasks. Check out this overview and see what LSFS is all about.
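
The core trick is easy to show in a few lines: every logical write, however random its address, becomes a sequential append to a log, with an index tracking where the latest version of each block lives. This is a hypothetical Python toy of the general technique, not StarWind's LSFS.

```python
class LogStructuredStore:
    """Toy log-structured store: random logical writes become sequential appends."""

    BLOCK = 4096

    def __init__(self):
        self.log = bytearray()  # append-only log (sequential on disk)
        self.index = {}         # logical block number -> offset of latest version

    def write(self, lbn: int, data: bytes) -> None:
        assert len(data) == self.BLOCK
        self.index[lbn] = len(self.log)  # latest version wins
        self.log += data                 # always a sequential append

    def read(self, lbn: int) -> bytes:
        off = self.index[lbn]
        return bytes(self.log[off:off + self.BLOCK])

store = LogStructuredStore()
store.write(7, b"a" * 4096)   # "random" logical addresses...
store.write(3, b"b" * 4096)   # ...all land as sequential appends
store.write(7, b"c" * 4096)   # overwrite leaves a stale copy behind in the log
print(store.read(7)[:1])      # -> b'c'
```

The same snippet also hints at the drawback side of the list: overwrites leave stale data in the log, so real log-structured systems need garbage collection, which is part of why the design suits write-heavy virtualization workloads better than everyday general-purpose use.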


(more…)