Posted by Ivan Ischenko on March 8, 2018
Simplify storage management with Microsoft System Center VMM (SCVMM) and SMI-S

SMI-S, or ‘Storage Management Initiative – Specification’, is a storage management standard (surprise!) that lets you administer the storage layer using the ‘Common Information Model’ (CIM) and Web-Based Enterprise Management (WBEM) technologies. The main point of SMI-S is to provide a single standard for managing various storage systems from different vendors in pretty much the same way. In this article, we will show you how to manage your storage with SCVMM 2016 (System Center Virtual Machine Manager) through SMI-S, and how this whole thing works in general. We’ll use StarWind Virtual SAN as a reference distributed storage platform, but the primary scope of this document is to cover the subject in general, so any SMI-S-compatible storage will work.
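Under the hood, an SMI-S client such as SCVMM talks to the provider by posting CIM-XML operation requests over HTTP. The sketch below builds one such request, an EnumerateInstances call for the standard CIM_StorageVolume class, purely for illustration; the message ID and namespace are assumed values, not anything StarWind- or SCVMM-specific.

```python
import xml.etree.ElementTree as ET

def build_enumerate_request(class_name, namespace=("interop",)):
    """Build a CIM-XML EnumerateInstances request body, the kind of
    payload an SMI-S client posts over HTTP to a provider."""
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID="1001", PROTOCOLVERSION="1.0")
    req = ET.SubElement(msg, "SIMPLEREQ")
    call = ET.SubElement(req, "IMETHODCALL", NAME="EnumerateInstances")
    path = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace:  # e.g. the interop namespace
        ET.SubElement(path, "NAMESPACE", NAME=part)
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=class_name)
    return ET.tostring(cim, encoding="unicode")

# Ask the provider to list every storage volume it exposes
print(build_enumerate_request("CIM_StorageVolume"))
```

The point is not the XML itself, but that every SMI-S-capable array answers this same request format, which is what lets SCVMM manage them all identically.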

Learn More

Posted by Dmytro Khomenko on February 27, 2018
Storage Tiering – the best of both worlds

Before SSDs claimed their rightful place in the modern datacenter, there was the era of slow, unreliable, fragile spinning rust. The moment of change divided the community into two groups: the first dreaming of implementing SSDs in their environment, and the second with SSDs already part of their infrastructure.
The idea of having your data stored on the appropriate tier has never been so intriguing. The possibility of granting your mission-critical VM the performance it deserves at the moment of need has never been more appropriate.
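The core idea of tiering can be sketched in a few lines: track how often each block (or volume) is accessed, and promote the hot ones to flash while cold ones stay on capacity disks. The threshold and names below are made-up illustration values, not any vendor's actual policy, but real tiering engines work from a similar heat map.

```python
# Toy model of automated storage tiering based on an access heat map.
HOT_THRESHOLD = 100  # accesses per interval; an assumed policy value

def assign_tier(access_counts):
    """Map block_id -> 'ssd' or 'hdd' based on access frequency."""
    return {
        block: "ssd" if count >= HOT_THRESHOLD else "hdd"
        for block, count in access_counts.items()
    }

heat_map = {"vm_boot": 450, "db_index": 120, "old_backup": 3}
print(assign_tier(heat_map))
# hot blocks land on flash, cold ones on spinning disks
```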

Learn More

Posted by Alex Khorolets on July 14, 2017
Windows Server 2016 Core configuration. Part 1: step-by-step installation

This series of articles will guide you through the basic deployment of the Microsoft Windows Server 2016 Core version, covering all the steps from the initial installation to the deployment of the Hyper-V role and Failover Cluster configuration.

The first and most important thing you need to double-check before installing Windows Server 2016 Core is whether your hardware meets the system requirements of WS 2016. This is also very important when planning your environment, to be sure that you have enough compute resources for running your production workload.

Windows Server installation

Learn More

Posted by Augusto Alvarez on June 19, 2017
Azure Introduces Storage Service Encryption for Managed Disks with No Additional Cost

As we have noted several times, security is one of the main concerns for cloud providers looking to guarantee the privacy of their customers’ data. Microsoft has just announced the public availability of Storage Service Encryption (SSE) for Azure Managed Disks, at no additional cost.

Azure Storage Service Encryption

Learn More

Posted by Alex Bykovskyi on June 12, 2017
StarWind Swordfish Provider

Introduction

The fast-paced world of system administration keeps growing and picking up steam all the time. The ongoing migration of business to the cloud is a good example of this. Thus, the continuous pursuit of a tool that would simplify the life of a system administrator stays permanently on the agenda of the IT world. The main goal behind the Storage Management Initiative (SMI) is to eliminate the long journey of finding the right solution.

To simplify the life of system administrators to an even greater extent, the StarWind team has decided to extend the feature set by developing the StarWind Swordfish Provider.
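Swordfish (SNIA's successor to SMI-S, built on DMTF Redfish) is a RESTful, JSON-based API, so a client simply GETs well-known URIs and walks the returned collections. The sketch below parses a Swordfish-style storage services collection; the sample payload and service name are illustrative, not an actual StarWind response.

```python
import json

# Illustrative Swordfish-style collection document, as a provider
# might return it from a GET on its storage services URI.
sample_response = json.dumps({
    "@odata.id": "/redfish/v1/StorageServices",
    "Name": "Storage Services Collection",
    "Members": [
        {"@odata.id": "/redfish/v1/StorageServices/StarWindVSAN"},
    ],
})

def list_service_uris(payload):
    """Extract the URIs of the storage services a provider advertises."""
    body = json.loads(payload)
    return [member["@odata.id"] for member in body.get("Members", [])]

print(list_service_uris(sample_response))
```

Because everything is plain HTTP and JSON, any generic REST client can manage a Swordfish-capable storage backend without vendor-specific tooling.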

StarWind Management Console: creating a profile for the Storage Node

Learn More

Posted by Alex Khorolets on April 21, 2017
Supermicro SuperServer E200-8D/E300-8D review

These days, more and more companies need high-quality, reliable, and efficient server hardware. Home labs, used by enthusiasts and IT professionals for software development and testing, studying for IT certifications, and configuring virtual environments, have become popular as well. Small companies whose production runs on just a couple of virtual machines or networking applications are also interested in cheap and compact servers.

Supermicro has held one of the leading positions in server development for a long time. Its products range from high-end clusters to microservers. Recently the company released two compact servers: the SuperServer E200-8D and its younger sibling, the SuperServer E300-8D.

Supermicro SuperServers

Learn More

Posted by Jon Toigo on April 4, 2017
Data Management Moves to the Fore. Part 2: Data Management Has Many Moving Parts

In the previous blog, we established that there is a growing need to focus on Capacity Utilization Efficiency in order to “bend the cost curve” in storage.  Just balancing data placement across repositories (Capacity Allocation Efficiency) is insufficient to cope with the impact of data growth and generally poor management.  Only by placing data on infrastructure in a deliberate manner that optimizes data access, storage services, and costs can IT pros possibly cope with the coming data deluge anticipated by industry analysts.

The problem with data management is that it hasn’t been advocated or encouraged by vendors in the storage industry.  Mismanaged data, simply put, drives the need for more capacity – and sells more kit.

COMPONENTS OF A COGNITIVE DATA MANAGEMENT SOLUTION

Learn More

Posted by Jon Toigo on March 28, 2017
Data Management Moves to the Fore. Part 1: Sorting Out the Storage Junk Drawer

Most presentations one hears at industry trade shows and conferences have to do, fundamentally, with Capacity Allocation Efficiency (CAE).  CAE seeks to answer a straightforward question:  Given a storage capacity of x petabytes or y exabytes, how will we divvy up space to workload data in a way that reduces the likelihood of a catastrophic “disk full” error?

Essentially, from a CAE perspective, efficiency involves balancing the volume of bits across physical storage repositories in a way that does not leave one container nearly full while another has mostly unused space.  The reason is simple.  As the volume of data grows and the capacity of media (whether disk or flash) increases, a lot of data, belonging to many users, can find its way into a single repository.  In so doing, access to the data can be impaired (a lot of access requests across a few bus connections can introduce latency).  This, in turn, shows up in slower application performance, whether the workload is a database or a virtual machine.
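The balancing act described above can be sketched as a greedy placement policy: steer each new dataset to the repository with the most free space, so no container sits nearly full while another is mostly empty. Repository names and sizes here are made up for illustration.

```python
# A minimal sketch of Capacity Allocation Efficiency: allocate new data
# to the repository with the most free capacity.

def place(repositories, dataset_gb):
    """Pick the repository with the most free capacity and allocate there."""
    target = max(repositories, key=lambda name: repositories[name]["free_gb"])
    if repositories[target]["free_gb"] < dataset_gb:
        raise RuntimeError("all repositories are full")
    repositories[target]["free_gb"] -= dataset_gb
    return target

repos = {"array-a": {"free_gb": 800}, "array-b": {"free_gb": 200}}
print(place(repos, 300))  # lands on array-a, the emptier container
```

As the article argues, this kind of balancing addresses only space, not whether the data belongs on that class of storage at all, which is where utilization efficiency comes in.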

Survey of 2000 company disk storage environments

Learn More

Posted by Andrea Mauro on February 24, 2017
Design a ROBO infrastructure (Part 2): Design areas and technologies

In the previous post, we described the business requirements and constraints that support design and implementation decisions suited for mission-critical applications, and considered how risk can affect those decisions.

Now we will map the following technology aspects onto those design requirements:

  • Availability
  • Manageability
  • Performance and scaling
  • Recoverability
  • Security
  • Risk and budget management

ROBO Design areas and technologies

Learn More

Posted by Alex Khorolets on February 23, 2017
RAM Disk technology: Performance Comparison

Introduction

Every computer has a certain amount of volatile storage available in RAM. On other direct-access media used for data storage (hard disks, CD-RWs, DVD-RWs, and the older drum memory), the time needed to read or write data varies with the physical location of the data and with the medium itself (rotation speed and arm movement).

Using RAM as storage provides a number of benefits over conventional devices, because data is read or written in the same amount of time irrespective of its physical location inside the volume. Taking all of the above into consideration, it would be a crime not to take advantage of these conditions.
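A RAM disk is, at its simplest, a block device emulated on top of a chunk of memory. The toy sketch below (names and sizes are illustrative) shows the property that makes RAM disks attractive: a write or read half-way into the volume costs the same as one at the start, since there is no head to move and no platter to spin.

```python
# A toy RAM disk: a block device emulated on top of a bytearray.
class RamDisk:
    def __init__(self, size_bytes):
        self.buf = bytearray(size_bytes)  # backing store lives in RAM

    def write(self, offset, data):
        self.buf[offset:offset + len(data)] = data

    def read(self, offset, length):
        return bytes(self.buf[offset:offset + length])

disk = RamDisk(1024 * 1024)           # 1 MiB of "disk"
disk.write(0, b"first block")         # beginning of the volume
disk.write(1024 * 512, b"middle")     # half-way in: no seek penalty
print(disk.read(1024 * 512, 6))       # b'middle'
```

A production RAM disk driver additionally presents this buffer to the OS as a real block device, but the access-time behavior is exactly this.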

RAM

Learn More