Posted by Paulsen Muzari on March 14, 2018
Whip your Hyperconverged Failover Cluster into shape automatically and with no downtime using Microsoft’s Cluster Aware Updating

Some admins prefer cluster updates to be done automatically. To that end, Microsoft designed a feature that facilitates patching of Windows Servers from 2012 to 2016 configured in a failover cluster. Cluster Aware Updating (CAU) applies updates automatically, thereby avoiding service disruption for clustered roles.

In this article, we are going to take a look at how we can achieve this, assuming that the cluster is built as a hyperconverged scenario with StarWind Virtual SAN used as the shared storage. Before going into the steps to set up CAU, we will examine this scenario.

Learn More

Posted by Andrea Mauro on March 13, 2018
CLI vs. GUI for VMware Admins


The term User Interface (UI) specifies how a user interacts with a particular device or piece of software. CLI and GUI are two different types of user interface.

Let’s analyze those different approaches and the pros and cons of each, using the VMware vSphere environment as an example.

Learn More

Posted by Ivan Ischenko on March 8, 2018
Simplify storage management with Microsoft System Center VMM (SCVMM) and SMI-S


SMI-S, or ‘Storage Management Initiative – Specification’, is a storage management standard (surprise!) that lets you administer the storage layer using the ‘Common Information Model’ and Web-Based Enterprise Management technologies and logic. The main point of SMI-S is to provide a single standard for managing various storage systems from different vendors in pretty much the same way. In this article, we will show you how to manage your storage using SCVMM 2016 (System Center Virtual Machine Manager) through SMI-S, and how this whole thing works in general. We’ll use StarWind Virtual SAN as a reference distributed storage platform, but the primary scope of this document is to cover the subject in general, so any SMI-S-compatible storage will work.

Learn More

Posted by Ivan Ischenko on March 7, 2018
Creating ESXi VMs on a Windows-based NFS share


Much has been said about NFS (Network File System), but what exactly can NFS give us? In general, NFS is used as an ISO library or just a simple network file share with easy access from any Windows- or Linux-based machine. However, starting from version 3.0, the NFS protocol can deliver good performance and can serve as shared storage for ESXi or any Linux-based hypervisor. In this article, I will create an NFS share on Windows Server 2016, then mount it on ESXi 6.5 and create a VM on it.

Learn More

Posted by Vladan Seget on March 1, 2018
How to enable Active Directory Recycle Bin in Windows Server 2016

Before we dive into how to enable Active Directory Recycle Bin in Windows Server 2016, we will first explain what it is and when Microsoft introduced this feature.

Active Directory Recycle Bin simply allows you to restore deleted objects from Active Directory. It can be a user account, a computer account, or a whole Organizational Unit (OU). Who hasn’t accidentally deleted an AD object at some point in their career?


Learn More

Posted by Karim Buzdar on February 28, 2018
Crashed Microsoft Exchange 2013 Database? No sweat. Learn how to recover it with ease


Companies often store critical client mailbox data on an Exchange server database. The Exchange database is a warehouse of critical mailbox information such as contacts, notes, calendar items, and emails of thousands of users. One of the most serious issues companies can face is corruption of the Microsoft Exchange 2013 database file, leading to unavailability of important data for clients. The Microsoft Exchange 2013 database can become vulnerable to crashes due to hardware issues, software malfunctions, system freezes, server or boot failures, accidental shutdowns, or other unforeseen circumstances. Since the last thing a company wants is to endanger business goals such as data availability during a disaster, the first step is to recover the damaged file.

Learn More

Posted by Dmytro Khomenko on February 27, 2018
Storage Tiering – the best of both worlds


Before SSDs took their irreplaceable place in the modern datacenter, there was a time of slow, unreliable, fragile spinning rust. The moment of change divided the community into two groups – the first dreaming of implementing SSDs in their environment, and the second with SSDs already being part of their infrastructure.

The idea of having your data stored on the appropriate tier has never been so intriguing. The possibility of granting your mission-critical VM the performance it deserves in its moment of need has never been more relevant.
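As a rough illustration of the idea, here is a minimal Python sketch of heat-based tier placement. The tier names, thresholds, and per-VM read counters are all invented for the example and do not describe any particular product’s policy:

```python
# Hypothetical heat thresholds (reads per day) for a three-tier layout.
def assign_tier(reads_per_day: int) -> str:
    """Place hot data on fast media and cold data on capacity media."""
    if reads_per_day >= 1000:
        return "tier1-nvme"   # mission-critical, latency-sensitive
    if reads_per_day >= 100:
        return "tier2-ssd"    # warm working set
    return "tier3-hdd"        # cold / archival data

# Made-up access counters for three workloads.
workloads = {"vm-sql-db": 5400, "vm-web-logs": 250, "vm-archive": 3}
placement = {name: assign_tier(heat) for name, heat in workloads.items()}
print(placement)
```

A real tiering engine would of course track heat continuously and migrate blocks, not whole VMs, but the decision rule is the same shape: measure access frequency, compare against thresholds, place accordingly.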

Learn More

Posted by Boris Yurchenko on February 22, 2018
Dedupe or Not Dedupe: That is the Question!

When planning or improving an IT infrastructure, one of the most difficult challenges is choosing an approach that will require as few changes as possible when scaling up later. Keeping this in mind is really important, as at some point almost all environments reach the state where the need for growth becomes evident. And with hyperconvergence popular these days, managing large setups with fewer resources involved has become quite easy.

Today I will deal with data deduplication analysis. Data deduplication is a technique that avoids storing repeated identical data blocks. Basically, during the deduplication process, unique data blocks, or byte patterns, are identified, analyzed, and written to the storage array. As this analysis runs continuously, incoming data blocks are compared to the initially stored patterns. If a match is found, instead of storing the data block again, the system stores a small reference to the original block. In small environments this is mostly not crucial, yet in those with dozens or hundreds of VMs, the same patterns can occur numerous times. Thus, thanks to the advanced algorithms used, data deduplication allows storing more information on the same physical storage volume than traditional data storage methods. This can be achieved in several ways, one of which is StarWind LSFS (Log-Structured File System), which offers inline deduplication of data on LSFS-powered virtual storage devices.
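The reference-instead-of-copy logic described above can be sketched in a few lines of Python. The 4 KB block size and SHA-256 fingerprinting here are illustrative assumptions, not a description of how StarWind LSFS is actually implemented:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative deduplication block size

class DedupStore:
    """Toy content-addressable store illustrating block-level deduplication."""
    def __init__(self):
        self.blocks = {}  # fingerprint -> unique block data (the "storage array")
        self.refs = []    # per-write list of fingerprints (the "references")

    def write(self, data: bytes) -> int:
        fingerprints = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Store the block only if this byte pattern was never seen before;
            # otherwise just record a reference to the existing copy.
            if digest not in self.blocks:
                self.blocks[digest] = block
            fingerprints.append(digest)
        self.refs.append(fingerprints)
        return len(self.refs) - 1  # write id, for later reads

    def read(self, write_id: int) -> bytes:
        return b"".join(self.blocks[h] for h in self.refs[write_id])

store = DedupStore()
payload = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE  # 4 blocks, only 2 unique patterns
store.write(payload)
store.write(payload)                # identical second copy: no new blocks stored
print(len(store.blocks))            # unique blocks actually kept
print(store.read(1) == payload)     # data still reconstructs correctly
```

Eight logical blocks were written, but only two unique byte patterns hit the “disk” – everything else is references, which is exactly where the space savings come from.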

Learn More

Posted by Romain Serre on February 21, 2018
Manage backups of physical machines with Veeam Backup & Replication Update 3


In December, Veeam released Update 3 of Veeam Backup & Replication, which brings central management of Veeam Agent for Windows / Linux. Thanks to this update, you can now manage backups of physical machines or cloud instances from a single pane of glass. In this topic, we’ll see how to manage a physical machine from the Veeam Backup & Replication console.

Learn More

Posted by Boris Yurchenko on February 20, 2018
Don’t break your fingers with hundreds of clicks – automate Windows iSCSI connections

If you have a single environment with only several iSCSI targets discovered from a couple of target portals, messing with automation may not be worth it. Yet if you have multiple environments with a bunch of portals and targets that need to be discovered and connected, all of them more or less similar in terms of configuration, you might find your salvation in automating the whole process.
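As a taste of what such automation looks like, here is a minimal Python sketch that expands a portal/target inventory into the PowerShell cmdlets you would otherwise run by hand. The portal addresses and IQNs are invented for the example:

```python
# Hypothetical inventory: portal addresses and the targets discovered on each.
PORTALS = {
    "10.1.1.10": ["iqn.2008-08.com.starwindsoftware:sw1-witness",
                  "iqn.2008-08.com.starwindsoftware:sw1-csv1"],
    "10.1.1.11": ["iqn.2008-08.com.starwindsoftware:sw2-csv1"],
}

def build_commands(portals: dict) -> list:
    """Expand the inventory into the PowerShell cmdlets you would otherwise
    click through in the iSCSI Initiator GUI, one line per action."""
    cmds = []
    for portal, targets in portals.items():
        cmds.append(f"New-IscsiTargetPortal -TargetPortalAddress {portal}")
        for iqn in targets:
            cmds.append(f"Connect-IscsiTarget -NodeAddress {iqn} -IsPersistent $true")
    return cmds

for line in build_commands(PORTALS):
    print(line)
```

In practice you would run the generated lines (or their direct PowerShell equivalent) on each host; the point is that the inventory lives in one place and the hundreds of clicks become a loop.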

I hope to post some other automation things here, so tune in and check the StarWind blog from time to time.

Learn More