StarWind Maintenance Mode Overview
Posted by Ivan Talaichuk on October 19, 2017

Howdy, folks! I would like to begin with a little backstory about the usefulness that “maintenance mode” brings us, and to do that, I'll go back to the times when updates meant downtime for production.

It's no secret that any production environment occasionally needs maintenance, whether a software update or a hardware reconfiguration. To perform it, the administrator has to stop the production server for a certain period of time, and this may affect the reliability of the production environment: both the fault tolerance level and the performance can drop. This is especially critical for small infrastructures that consist of only 2 nodes.

So, let's take a closer look at StarWind maintenance mode and what it delivers. First of all, it eliminates the downtime caused by planned node shutdowns and keeps the nodes in a pre-synchronized state, so no full synchronization has to be run afterwards. As a result, the system doesn't experience any performance or availability degradation.

Enable the maintenance mode on an HA image


Cost and License considerations between Always On Availability Groups and Always On Basic Availability Groups
Posted by Shashank Singh on October 17, 2017

Windows Server edition considerations

With Windows Server 2012 and above, Standard Edition has full support for clustering: not just simple 2-node active/passive clusters, but fully featured failover clustering. Before Windows Server 2012, only Windows Server Enterprise Edition supported Windows Server Failover Clustering (WSFC), so starting with Windows Server 2012, clustering got a huge licensing cost reduction.

The cost of Windows Server 2012 Standard is almost the same as that of Windows Server 2008 R2 Standard, but Windows Server 2012 Datacenter Edition saw an almost 26% price increase. There is no difference in feature support between the Windows Server 2012 Standard and Datacenter editions; the major difference is that a Standard license only covers hosting 2 virtual machines (by default), while with Datacenter this is unlimited. You can host more than 2 VMs on Standard, but that implies extra licensing cost (for example, running 8 VMs on a single Standard host means stacking four Standard licenses).

BAG vs AG


SMB Direct in a Windows Server 2016 Virtual Machine Experiment
Posted by Didier Van Hoye on October 12, 2017

Introduction

Ever since Windows Server 2012, we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with CPU offloading to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.

Traditionally, in order for SMB Direct to work, the SMB stack needs direct access to the RDMA NICs. This means that right up to Windows Server 2012 R2 we had SMB Direct running on physical NICs on the host or the parent partition/management OS. You could not have RDMA exposed on a vNIC or even on a host native NIC team (LBFO). SMB Direct was also not compatible with SR-IOV. That was, and for those OS versions still is, common knowledge and a design consideration. With Windows Server 2016, things changed. You can now have RDMA exposed on a vSwitch and on management OS vNICs. Even better, the new Switch Embedded Teaming (SET) allows RDMA to be exposed in the same way on top of a teamed vSwitch. SET is an important technology here, as RDMA is still not exposed on a native Windows team (LBFO).
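To make that concrete, here is a minimal sketch of exposing RDMA on management OS vNICs on top of a SET vSwitch in Windows Server 2016. It is not taken from the article, and the pNIC and vNIC names ("NIC1", "NIC2", "SMB1", "SMB2") are hypothetical:

# Create a Switch Embedded Teaming (SET) vSwitch over two RDMA-capable pNICs (hypothetical names)
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add management OS vNICs for SMB traffic
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB2"

# Enable RDMA on the vNICs (they appear as "vEthernet (<name>)" in the management OS)
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"

# Verify that RDMA is now exposed on the vNICs
Get-NetAdapterRdma | Format-Table Name, Enabled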

Mellanox InfiniBand Router


Back to Enterprise Storage
Posted by Jon Toigo on October 11, 2017

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks.  A few years ago, the hypervisor vendors seized on consumer anger around overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon.  Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market.  However, the downside of creating silo’ed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.

Shared and silo'ed storage components


The dark side of converged storage networks
Posted by Andrea Mauro on October 10, 2017

Introduction

The fabric of a SAN (Storage Area Network) built on Fibre Channel solutions has always been a dedicated network, with dedicated components (like FC switches).

But, starting with the iSCSI and FCoE protocols, the storage fabric can now be shared with the traditional network infrastructure, because at least layers 1 and 2 use a common Ethernet layer (for iSCSI, layers 3 and 4 are also the same as in TCP/IP networks).

Hosts (the initiators) in a converged network typically use Converged Network Adapters (CNAs) that provide both Ethernet and storage functions (usually FCoE and iSCSI). The result is that LAN and SAN share the same physical network:

LAN and SAN shared on the same physical network
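As a quick illustration of the point that iSCSI simply rides the host's ordinary TCP/IP stack, here is a minimal sketch (not from the article) of connecting a Windows initiator to a target over a standard Ethernet NIC; the portal address 10.10.10.10 is hypothetical:

# Start the Microsoft iSCSI initiator service
Start-Service MSiSCSI

# Point the initiator at the target portal; any routed Ethernet NIC will do,
# which is exactly why iSCSI traffic can share the converged network
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10"

# Discover the targets exposed by that portal and connect to them persistently
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true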


[Azure Automation] Migrate your scripts to Azure – Part 2
Posted by Florent Appointaire on October 5, 2017

I have recently been looking at Azure Automation from top to bottom. That's why, in these 2 articles, we will see how to use this tool from A to Z:

  • [Azure Automation] Interface discovery – Part 1
  • [Azure Automation] Migrate your scripts to Azure – Part 2 (this post)

Today, we will see how to migrate your on-premises scripts to Azure Automation.
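As a rough sketch of what the migration boils down to (importing an existing .ps1 as a runbook and publishing it), assuming the AzureRM.Automation module that was current at the time of writing and hypothetical resource group, account and path names:

# Sign in and import an on-premises script as a PowerShell runbook
Login-AzureRmAccount

Import-AzureRmAutomationRunbook `
    -ResourceGroupName "RG-Automation" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "Invoke-MyOnPremScript" `
    -Path "C:\Scripts\Invoke-MyOnPremScript.ps1" `
    -Type PowerShell

# Publish the runbook so it can be started or scheduled in Azure Automation
Publish-AzureRmAutomationRunbook `
    -ResourceGroupName "RG-Automation" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "Invoke-MyOnPremScript"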

Azure Automation logo


Accessing esxcli through PowerCLI
Posted by Mike Preston on October 4, 2017

Picture this – you are working away developing a PowerCLI script that performs multiple actions, and you have it just about complete when you hit a roadblock. After frantically googling around, you find out that the one task you are trying to perform simply cannot be done through PowerShell, yet you know it exists within the local ESXi esxcli command namespace! This has happened to me multiple times, and thankfully there is a way to access ESXi's esxcli command namespace without having to leave the comfort of the PowerShell console.

Chances are that if you have been working with ESXi at all, you are familiar with the esxcli command – but for those who aren't, let's take a quick look at what exactly it does.
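Here is a minimal sketch of that approach using the Get-EsxCli cmdlet and its v2 interface; the vCenter and host names are hypothetical:

# Connect to vCenter and grab an esxcli (v2) object bound to a specific host
Connect-VIServer -Server "vcenter.lab.local"
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.lab.local") -V2

# Browse the namespaces just as you would on the host, e.g. list the physical NICs
$esxcli.network.nic.list.Invoke()

# Namespaces that take arguments use a hashtable created with CreateArgs()
$arguments = $esxcli.network.nic.get.CreateArgs()
$arguments.nicname = "vmnic0"
$esxcli.network.nic.get.Invoke($arguments)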

esxcli namespaces


Google Cloud Trying to Catch Up: NVIDIA GPUs and Discounts for Virtual Machines
Posted by Augusto Alvarez on October 3, 2017

We've reviewed several times the clear supremacy of AWS and Azure in cloud services market share (more details about recent surveys can be found here: “AWS Bigger in SMBs but Azure is the Service Most Likely to Renew or Purchase”), with Google Cloud landing in third place for most services. Now Google is introducing new NVIDIA GPUs for its virtual machines, along with sustained use discounts for customers running the new GPU-equipped VMs.

NVIDIA GPUs for Google Compute Engine


What is Veeam Powered Network (VeeamPN) and why do you need it?
Posted by Alex Samoylenko on September 28, 2017

This spring, during VeeamON 2017, Veeam Software presented Veeam Powered Network (VeeamPN), their first solution in the field of enterprise networking.

Veeam PN is a simple tool for establishing a VPN between all parts of a distributed infrastructure: the headquarters, remote and branch offices, employees working remotely, etc. The solution is based on OpenVPN technology. It is delivered as a virtual machine running in Microsoft Azure (Veeam PN Server) or in the client's private cloud, with components on the customer's sites (Veeam PN Gateway):

Veeam Powered Network for Azure


The importance of IeeePriorityTag with converged RDMA Switch Embedded Teaming
Posted by Didier Van Hoye on September 27, 2017

Introduction

If you read my blog on Switch Embedded Teaming with RDMA (for SMB Direct), you'll notice that I set -IeeePriorityTag to "On" on the vNICs that use DCB for QoS. This requires some explanation.

When you configure a Switch Embedded Teaming (SET) vSwitch and define one or more management OS vNICs on which you enable RDMA, you will see that the SMB Direct traffic gets its priority tag set correctly. This happens no matter what you set the -IeeePriorityTag option to: On or Off, it doesn't make a difference. It works out of the box.
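For reference, this is the kind of configuration being discussed; a minimal sketch, not from the original article, assuming a hypothetical management OS vNIC named "SMB1" and SMB Direct traffic classified into DCB priority 3:

# Classify SMB Direct (NetDirect port 445) traffic into 802.1p priority 3 for DCB/QoS
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Allow IEEE 802.1p priority tags set in the management OS to pass through this vNIC untrimmed
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -IeeePriorityTag On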

RDMA vNICs mapped to their respective RDMA pNICs
