Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment
Posted by Taras Shved on December 27, 2017

Introduction

In the previous article, I described three scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. To recap, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn't at all). For this, I'm going to examine how NVMe-oF performs on a bare metal configuration and on infrastructures with Hyper-V and ESXi deployed. In each case, I'll also evaluate the performance of the iSER transport using LIO and of SPDK iSCSI. Now that you have an overall understanding of the project, it's time to move on to configuring the testing environment.
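Just to illustrate what a single measurement can look like on the client side, here is a minimal Python sketch that drives fio against the NVMe-oF-attached namespace and pulls the read IOPS out of fio's JSON output. The device path (/dev/nvme1n1) and the job parameters are placeholders for illustration, not the exact settings used in this series.

```python
# Minimal sketch: run a 4k random-read fio job and report read IOPS.
# Assumptions: fio is installed and /dev/nvme1n1 is the NVMe-oF namespace
# on the initiator; all job parameters below are illustrative only.
import json
import subprocess

def run_fio_randread(device: str, runtime_s: int = 60) -> float:
    """Run a 4k random-read fio job on `device` and return the measured read IOPS."""
    cmd = [
        "fio",
        "--name=nvmeof-randread",
        f"--filename={device}",
        "--ioengine=libaio",
        "--direct=1",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=32",
        "--numjobs=4",
        "--time_based",
        f"--runtime={runtime_s}",
        "--group_reporting",
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    data = json.loads(result.stdout)
    # With --group_reporting, all jobs are aggregated into a single entry.
    return data["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    print(f"Read IOPS: {run_fio_randread('/dev/nvme1n1'):.0f}")
```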

NVMe-oF on bare metal

Learn More

Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©
Posted by Taras Shved on December 20, 2017

There’s a common opinion that performance in general, and IOPS-intensive workloads like NVMe over Fabrics in particular, is usually lower in virtualized environments due to hypervisor overhead. Therefore, I’ve decided to run a series of tests to prove or knock down this belief. For this purpose, I’ll use three scenarios for measuring NVMe over Fabrics performance in different infrastructures: first, on a bare metal configuration; second, with Microsoft Hyper-V deployed on the client server; and finally, with ESXi 6.5.

In each case, I’ll also compare the performance of NVMe-oF, NVMe-oF over iSER transport, and SPDK iSCSI.
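As a rough sketch of how that comparison can be tallied once the fio runs are done, the snippet below reads one fio JSON result file per transport and prints the read IOPS side by side. The file names are hypothetical placeholders for this illustration.

```python
# Sketch only: compare aggregated read IOPS from fio JSON results,
# one result file per transport. File names are hypothetical placeholders.
import json

RESULT_FILES = {
    "NVMe-oF": "nvmeof.json",
    "NVMe-oF over iSER": "iser.json",
    "SPDK iSCSI": "spdk_iscsi.json",
}

def read_iops(path: str) -> float:
    """Extract aggregated read IOPS from a fio --output-format=json result file."""
    with open(path) as f:
        data = json.load(f)
    return data["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    for transport, path in RESULT_FILES.items():
        print(f"{transport:>20}: {read_iops(path):>12,.0f} read IOPS")
```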

NVMe over Fabrics on the client server

Learn More

Benchmarking Samsung NVMe SSD 960 EVO M.2
Posted by Taras Shved on March 24, 2017

Everyone knows that SSDs are currently among the best storage devices for upgrading your architecture and significantly accelerating a computer's performance. An SSD speeds up PC boot times, application launches, and file searches, and generally makes the whole system more responsive. Even though solid-state drives are more expensive than standard hard drives, the performance improvement can hardly be overlooked.

The modern market offers a variety of storage devices that differ in capacity, interface, memory type, and vendor. SATA SSDs are being replaced by PCIe NVMe SSDs, which deliver higher performance by connecting directly to the PCIe bus. A few months ago, Samsung announced the release of the SSD 960 PRO and SSD 960 EVO NVMe drives, which will be discussed in this post. Like the 950 PRO released last year, the Samsung 960 PRO and 960 EVO are PCIe 3.0 x4 drives that use the latest version of the NVMe protocol for data transfer, designed to reduce latency and use flash memory with maximum efficiency. Notably, the Samsung 960 EVO delivers performance close to the 960 PRO, but at a much more affordable price.

Samsung SSD 960 EVO

Learn More

Eliminating Blue Screen or Errors during failover
Posted by Taras Shved on March 15, 2017

Introduction

The reason for writing this post was a recent case from one of our customers who ran into an issue when their SAN switch failed. The problem was that their VMs were generating an enormous number of errors caused by the switching of active paths at the time of failover.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk
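The registry path above belongs to the Windows disk class driver. As a hedged illustration (my own sketch, not the exact procedure from the post), the snippet below reads the disk I/O timeout, TimeOutValue, from that key; raising this value is a common way to give multipathing a chance to switch paths before the guests see I/O errors.

```python
# Sketch only: inspect the disk class driver's I/O timeout on a Windows host.
# TimeOutValue (in seconds) lives under the registry key shown above; it is a
# common tuning point when path failover causes I/O errors in the VMs.
from typing import Optional
import winreg

DISK_KEY = r"SYSTEM\CurrentControlSet\Services\Disk"

def get_disk_timeout() -> Optional[int]:
    """Return TimeOutValue in seconds, or None if the value is not set."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DISK_KEY) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "TimeOutValue")
            return value
        except FileNotFoundError:
            return None

if __name__ == "__main__":
    timeout = get_disk_timeout()
    print("Disk TimeOutValue:", timeout if timeout is not None else "not set (driver default)")
```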

Learn More

Fibre Channel: Concepts and Configuration
Posted by Taras Shved on March 3, 2017

Introduction

This article is intended to introduce you to the main concepts and features of Fibre Channel (FC), a high-speed network technology and a related family of standards (protocols) for storage networking that was first standardized in 1994.

FC is one of the first technologies used for connecting data storage to servers, for example in Storage Area Networks (SANs). At the physical layer, it typically runs over optical fiber cables. There are three major Fibre Channel topologies: Point-to-Point, Arbitrated Loop, and Switched Fabric. FC is available at speeds of 1, 2, 4, 8, 10, 16, 32, and 128 Gbit/s.

Inverted pyramid of doom

Learn More

Storage Spaces Direct: Enabling S2D work with unsupported device types (BusType = NVMe, RAID, Fibre Channel)
Posted by Taras Shved on February 10, 2017

Introduction

Microsoft Storage Spaces Direct is a new storage feature introduced in Windows Server 2016 Datacenter that significantly extends the Software-Defined Storage stack in the Windows Server product family and allows users to build highly available storage systems using directly attached drives.

Storage Spaces Direct, or S2D, simplifies the deployment and management of Software-Defined Storage systems and allows the use of more disk device classes, such as SATA and NVMe drives. Previously, it was not possible to use these types of storage with clustered Storage Spaces built on shared disks.

Storage Spaces Direct can use drives that are locally attached to nodes in a cluster or disks that are attached to nodes via an enclosure. It aggregates all the disks into a single Storage Pool and enables the creation of virtual disks on top.

RAID-Configuration

Learn More
