Posted by Jon Toigo on April 27, 2016
Manage It Already

As I review the marketing pitches of many software-defined storage products today, I am concerned by how little attention any of the software stack descriptions pay to capabilities for managing the underlying hardware infrastructure. This strikes me as a huge oversight.

The truth is that delivering storage services via software (orchestrating and administering the delivery of capacity, data encryption, data protection and other services to the data hosted on a software-defined storage volume) is only half of the challenge of storage administration. The other half is maintaining the health and integrity of the gear and the interconnect cabling that provide the all-important physical underlayment of an increasingly virtualized world.

Learn More

Posted by Charbel Nemnom on April 26, 2016
Getting Started with Azure Resource Manager and Azure Deployment – Part I

Applications deployed in Microsoft Azure often comprise different but related cloud resources, such as virtual machines, web applications, SQL databases and virtual networks, among others. Before the introduction of Azure Resource Manager (Azure V2), it was necessary to define and provision these resources imperatively. Azure Resource Manager instead gives you the ability to define and provision these resources, with their configuration and associated parameters, declaratively in a JavaScript Object Notation (JSON) template file, known as an Azure Resource Manager template.
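
To make the declarative approach concrete, here is a minimal sketch (the resource group, storage account name and location are placeholder values, and the AzureRM PowerShell module is assumed): a bare-bones template skeleton written to disk and deployed with the Resource Manager cmdlets.

```powershell
# Minimal ARM template skeleton; the hypothetical storage account resource
# illustrates the standard parameters/variables/resources/outputs sections.
@'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    }
  ],
  "outputs": {}
}
'@ | Set-Content -Path .\azuredeploy.json

# Deploy the template into a resource group; template parameters surface
# as dynamic parameters on the deployment cmdlet.
New-AzureRmResourceGroup -Name "DemoRG" -Location "West Europe"
New-AzureRmResourceGroupDeployment -ResourceGroupName "DemoRG" `
    -TemplateFile .\azuredeploy.json `
    -storageAccountName "demostorage01"
```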

Learn More

Posted by Oksana Zybinskaya on April 25, 2016
SanDisk X400 SSD Review

SanDisk is one of the few companies currently offering 1TB of storage on a single-sided M.2 card: its X400 SSD. The X400 also comes in a 2.5″, 7mm-height form factor, but the M.2 configuration is the main selling point of this line. The 1TB M.2 X400 gives ultra-thin notebooks the most storage possible without sacrificing performance or battery life.

Learn More

Posted by Mike Preston on April 25, 2016
5 tips to help you explore the world of PowerShell scripting

In 2006, Windows administrators got their first glimpse of what the world of PowerShell scripting might look like when PowerShell, then known as Monad, was released to the world as a beta. Ten years later we are now on the fifth iteration of the scripting language and have seen a thriving ecosystem form around its Verb-Noun style of automation. PowerShell is a powerful tool and can be an amazing time-saver for any Windows administrator to know. That said, as with any scripting or programming language, getting started can be a little daunting, especially if you have no scripting experience to fall back on. Below we will take a look at 5 tips that can save you both time and energy when writing your PowerShell scripts.
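
As a small taste of that Verb-Noun style (the service-oriented cmdlets below are standard; the pipeline itself is just an example), the built-in discovery cmdlets are usually the first thing worth learning:

```powershell
# Discover which cmdlets exist for a given noun.
Get-Command -Noun Service

# Read the built-in documentation, including worked examples.
Get-Help Get-Service -Examples

# Inspect the properties and methods of whatever a cmdlet returns.
Get-Service | Get-Member

# Chain Verb-Noun cmdlets into a pipeline: list stopped services by name.
Get-Service | Where-Object { $_.Status -eq 'Stopped' } | Sort-Object Name
```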

Learn More

Posted by Romain Serre on April 21, 2016
Extend Active Directory to Microsoft Azure

Extending Active Directory to Microsoft Azure is a common scenario when you implement a hybrid cloud. For example, VMs protected with Azure Site Recovery may need access to Active Directory even if the on-premises datacenter is unreachable. You can also extend your Active Directory to Azure when you run production workloads in Azure VMs, to avoid implementing a new forest or routing all Active Directory traffic over the VPN connection. In this topic, we will see how to extend Active Directory to Microsoft Azure.
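
As a rough sketch of the promotion step (the domain name and credentials are placeholders, and the Azure VM is assumed to already reach the on-premises domain controllers over a site-to-site VPN), an additional domain controller is typically stood up in the Azure VM with the ADDSDeployment cmdlets:

```powershell
# Inside the Azure VM: install the AD DS role and management tools.
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

# Promote the VM to an additional domain controller for the existing
# on-premises domain ("contoso.local" is a placeholder).
Install-ADDSDomainController `
    -DomainName "contoso.local" `
    -InstallDns `
    -Credential (Get-Credential "CONTOSO\Administrator") `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")
```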

Learn More

Posted by Oksana Zybinskaya on April 14, 2016
OMS alerting is now generally available

Microsoft Operations Management Suite (OMS) alerting has moved from preview to general availability.

Learn More

Posted by Oksana Zybinskaya on April 12, 2016
Google, Rackspace to together unfurl DIY Power9 server designs

Google and Rackspace are cooperating on a new server design based on IBM's Power9 processors. The design is expected to be shared as part of the Open Compute Project. The hardware will use the 48V Open Compute rack specification developed jointly by Google and Facebook.

Learn More

Posted by Anton Kolomyeytsev on April 12, 2016
ReFS: Log-Structured

Here is a part of a series about the Microsoft Resilient File System (ReFS), first introduced in Windows Server 2012. It covers an experiment conducted by StarWind engineers to see ReFS in action. This part is mostly about the FileIntegrity feature of the file system: its theoretical application and its practical performance under a real virtualization workload. The feature is responsible for data protection in ReFS and is basically the reason for the "resilient" in its name. Its goal is to avoid the common errors that typically lead to data loss. Theoretically, ReFS can detect and correct data corruption without disturbing the user or disrupting the production process.
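
For reference, integrity streams can be inspected and toggled per file or folder with the Storage module cmdlets; a minimal sketch (the path is just an example on an ReFS volume):

```powershell
# Check whether integrity streams are enabled for a file on ReFS.
Get-FileIntegrity -FileName 'R:\VMs\test.vhdx'

# Turn integrity streams (checksumming) on for that file.
Set-FileIntegrity -FileName 'R:\VMs\test.vhdx' -Enable $true
```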

Learn More

Posted by Anton Kolomyeytsev on April 9, 2016
ReFS: Overview

This is a short overview of the Microsoft Resilient File System, or ReFS. It introduces the subject and gives a short insight into its main characteristics and theoretical use. It is part of a series of posts dedicated to ReFS and is, basically, an introduction to the practical posts; all the experiments that show how ReFS really performs are also listed in the blog. ReFS looks like a strong replacement for NTFS, and its resilience is most valuable in cases where data loss is unacceptable. The file system cooperates with Microsoft Storage Spaces Direct to perform automatic corruption repairs without any user intervention.
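
For anyone who wants to follow along with the practical posts, a minimal sketch of creating a test volume (the drive letter is an example; integrity streams can also be enabled per file later):

```powershell
# Format a volume with ReFS and enable integrity streams from the start.
Format-Volume -DriveLetter R -FileSystem ReFS -SetIntegrityStreams $true
```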

Learn More

Posted by Jon Toigo on April 7, 2016
Let’s Get Real About Data Protection and Disaster Recovery

Personally, I am getting rather tired of the dismissive tone adopted by virtualization and cloud vendors when you raise the issue of disaster recovery. We previously discussed the limited scope of virtual systems clustering and failover: active-passive and active-active server clusters with data mirroring are generally inadequate for recovery from interruption events with a footprint larger than a given equipment rack or subnetwork. Extending mirroring and cluster failover over distances greater than 80 kilometers is a dicey strategy, especially given the impact of latency and jitter on data transport over WAN links, which can create data deltas that prevent successful application or database recovery altogether.
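
A back-of-envelope calculation shows why the 80-kilometer figure matters for synchronous mirroring: light travels through fiber at roughly 200,000 km/s, and every mirrored write must wait out the round trip (the numbers below are illustrative and ignore equipment and protocol overhead).

```powershell
# Rough propagation delay for a synchronously mirrored write over fiber.
$distanceKm = 80
$fiberSpeedKmPerSec = 200000          # ~2/3 the speed of light in vacuum
$roundTripMs = (2 * $distanceKm / $fiberSpeedKmPerSec) * 1000
"Round trip at $distanceKm km: $roundTripMs ms per write"   # -> 0.8 ms
# At 1,000 km the same formula yields 10 ms of unavoidable latency per
# write, before any switch, router or replication-protocol overhead.
```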

Learn More
