IMPORTANT NOTICE

The hyper-v.io blog was acquired by StarWind Software, Inc. on March 1st, 2023.

We are currently reviewing the content of the blog, but please note that any opinions expressed before the effective date of the acquisition are solely those of the original owner(s). We will not provide any comments or opinions on the previous content. You are welcome to post comments on the original posts, but we are not obligated to respond to your inquiries.

Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 4: testing NFS on Linux

In the previous article, I measured the performance of NFS vs iSCSI to find out which network protocol is faster as storage for virtual machines on VMware ESXi. Well, iSCSI beat NFS under all test patterns. Additionally, I evaluated and compared the performance of the NFS client when connected to an NFS server running on Linux (Ubuntu Server 17.10) and on Windows Server 2016. According to the results, NFS server performance on Linux was higher than on Windows.


Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 3: test results

In the previous parts, I’ve shown you the process of configuring the NFS and iSCSI protocols between our servers. So now, we’ve got everything ready to run our performance tests and finally find out which network protocol is faster as storage for virtual machines on VMware ESXi: NFS or iSCSI.

So, to benchmark iSCSI performance, I’ve created a StarWind device on the server and connected it to the ESXi host over the iSCSI protocol. As the OS for running the tests, I’ve used Windows Server 2016.
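
The full post lays out the exact benchmark setup; purely as a hedged illustration of what such a synthetic run can look like, here is a DiskSpd sketch (DiskSpd itself and every parameter below are my assumptions, not the article’s actual tool or settings):

    # Hypothetical DiskSpd run from the Windows Server 2016 guest; all values are examples.
    # -b4K : 4 KB blocks       -d60 : 60-second run     -o32 : 32 outstanding I/Os
    # -t4  : 4 worker threads  -r   : random access     -w30 : 30% writes, 70% reads
    # -L   : collect latency stats   -c10G : create a 10 GB test file
    .\diskspd.exe -b4K -d60 -o32 -t4 -r -w30 -L -c10G D:\testfile.dat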


How Can I Replace a Failed Physical Disk on Storage Spaces Direct in Windows Server 2016?

So, we all know about Microsoft’s Storage Spaces Direct (S2D for short) by now. It’s the feature introduced in Windows Server 2016 (Datacenter edition) that pools together servers’ local storage, allowing you to build… that’s right: highly available and easily scalable software-defined storage systems. In this article, I’m gonna talk not so much about its fault-tolerance characteristics themselves as about some hands-on experience, namely: how to replace a failed disk.
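
Before diving into the full walkthrough, here is a rough PowerShell sketch of the replacement flow it covers (a sketch under assumptions, not the article’s exact commands; the pool name in step 3 is a placeholder):

    # 1. Find the failed disk and retire it so Storage Spaces stops allocating to it.
    $failed = Get-PhysicalDisk | Where-Object HealthStatus -ne 'Healthy'
    $failed | Set-PhysicalDisk -Usage Retired

    # 2. Rebuild the virtual disks that had extents on the retired disk; watch progress.
    Get-VirtualDisk | Repair-VirtualDisk -AsJob
    Get-StorageJob

    # 3. Once repairs complete, remove the retired disk from the pool and pull the drive.
    #    'S2D on Cluster1' is an assumed pool name; check Get-StoragePool for yours.
    Remove-PhysicalDisk -StoragePoolFriendlyName 'S2D on Cluster1' -PhysicalDisks $failed

    # 4. Insert the replacement and confirm it shows up (S2D normally claims it automatically).
    Get-PhysicalDisk -CanPool $true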


Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 2: configuring iSCSI

Cheers, friends! Not so long ago, we ran through the process of configuring an NFS disk and connecting it to the VMware host. What we’re gonna do next is measure and compare the performance of the NFS and iSCSI network protocols to see which one is more suitable for building a virtualized infrastructure. So, in this part, we’ll create an iSCSI device and connect it to the VMware ESXi host.
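
The post itself walks through the vSphere client screens; here is a rough PowerCLI equivalent of the ESXi side of that connection (all addresses and names below are placeholders, not our lab’s values):

    # Enable the software iSCSI initiator on the host.
    Connect-VIServer -Server 192.168.0.50                     # assumed ESXi host address
    $vmhost = Get-VMHost -Name 192.168.0.50
    Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled:$true

    # Point the initiator at the StarWind target portal (dynamic discovery).
    $hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object Model -Match 'Software'
    New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.0.10 -Type Send   # assumed target IP

    # Rescan, then format the discovered LUN as a VMFS datastore.
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
    $lun = Get-ScsiLun -VMHost $vmhost -LunType disk | Select-Object -Last 1
    New-Datastore -VMHost $vmhost -Name 'iSCSI-DS' -Path $lun.CanonicalName -Vmfs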


Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 1: configuring NFS

Hi there! There has been plenty of debate over which network protocol is better for building a virtualization infrastructure: NFS or iSCSI. Some experts argue that iSCSI gives better performance and reliability thanks to its block-based storage approach, while others favour NFS, citing management simplicity, large datastores, and the availability of cost-saving features like data deduplication on some NFS arrays.

Anyway, we’re not here for polemics but to see which protocol is better for your production environment, meaning which one really provides higher performance for your mission-critical applications. That’s what we all want, right?

Just to make it clear, the whole project will be divided into three parts: configuring NFS, configuring iSCSI, and the testing itself.

So, first things first. In this first chapter, I’ll guide you through the process of configuring and preparing the NFS protocol for further testing.
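
To give you an idea of where we’ll end up: the goal of this chapter is an NFS export mounted as an ESXi datastore, which in PowerCLI terms boils down to something like this (the server IP, export path, and datastore name are placeholders):

    # Mount an existing NFS export as a datastore on the ESXi host.
    Connect-VIServer -Server 192.168.0.50        # assumed ESXi host address
    New-Datastore -VMHost (Get-VMHost) -Name 'NFS-DS' -Nfs -NfsHost 192.168.0.10 -Path '/mnt/nfs_share'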

So, as Michael Buffer likes to say: “Let’s get ready to rumble!”


Windows Server 2016 Hyper-V VM Licensing

What’s new in Windows Server 2016

Windows Server 2016 differentiates the Standard and Datacenter licenses by feature set. The features available only to Datacenter users are listed below:

  • New storage features (including Storage Spaces Direct and Storage Replica)
  • Shielded virtual machines
  • The new network stack

The features enabled in the Datacenter edition are designed primarily for virtualized environments. It should also be noted that Windows Server 2016 supports Docker-powered Windows Server containers.

Fortunately, Windows containers are available without any additional licensing and have no restrictions on the number of running instances. Containers are also expected to be available on Windows 10 Pro and Enterprise, starting with version 1607 (the Anniversary Update).
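
If you want a quick smoke test of container support on a host (assuming Docker is already installed; the image tag reflects the Windows Server 2016-era naming):

    # Run a throwaway Windows Server Core container and print the guest OS version.
    docker run --rm microsoft/windowsservercore cmd /c ver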


Configuring Time Synchronization for all Computers in a Windows domain

Microsoft operating systems and server applications are becoming increasingly dependent on proper time synchronization. A skewed system clock can affect your ability to log on, can cause problems with mail flow in Exchange, and be the source of a great many difficult-to-locate problems. To compound matters, the default method of handling time synchronization within a Windows network isn’t exactly reliable or even predictable. If a Hyper-V host’s clock becomes out of sync, it usually affects all of its virtual machines, sometimes catastrophically. Fortunately, it doesn’t take much work to get everything in sync.
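
One practical corollary worth mentioning up front (my suggestion, not something the original article prescribes): if your authoritative time source will be a virtualized domain controller, consider disabling the Hyper-V time-synchronization integration service for that VM so it doesn’t inherit a skewed host clock. The VM name below is a placeholder:

    # Disable the guest's time-sync integration service on the Hyper-V host, then verify.
    Disable-VMIntegrationService -VMName 'DC01' -Name 'Time Synchronization'
    Get-VMIntegrationService -VMName 'DC01'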

Pick a Computer to Serve as the Authoritative Internal Time Source

The first thing you want to do is decide which machine will serve as the time authority within your domain. In most cases, I choose the domain controller that holds the PDC emulator role. According to Microsoft’s documentation, that’s supposed to be the highest authority on the matter anyway, although it doesn’t always seem to work out that way in practice. The machine that you choose will be regularly consulting Internet sources, so if you’re in a high-security facility, you might consider delegating this role to a different computer. You could have multiple machines serving as authoritative time sources, but more than one per site is generally unnecessary. You could also have one machine pull external time and have your PDC emulator use that as its source while still serving as the authoritative server for the rest of the computers in your domain.
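
Once you’ve picked the machine, the core of the configuration is a short w32tm sequence run on it; a minimal sketch, assuming the public NTP pool as the external source:

    # Point the chosen server at external NTP peers and mark it reliable for the domain.
    w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update
    Restart-Service w32time
    w32tm /query /status    # confirm the source and stratum look sane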