Deploying SQL Server 2016 Basic Availability Groups Without Active Directory. Part 1: Building the Platform
Posted by Edwin M Sarmiento on October 31, 2017

Introduction

When Availability Groups were introduced in SQL Server 2012, they were only available in Enterprise Edition. This made moving from Database Mirroring to Availability Groups challenging, especially if you were running Standard Edition. To upgrade and migrate away from Database Mirroring, you either had to pay for the more expensive Enterprise Edition license and implement Availability Groups, or stick with Database Mirroring and hope that everything kept working despite the feature being deprecated.

SQL Server 2016 introduced Basic Availability Groups in Standard Edition, giving customers a limited form of Availability Groups and, with it, a viable replacement for Database Mirroring. However, unlike Database Mirroring, Availability Groups require a Windows Server Failover Cluster (WSFC). SQL Server database administrators now need to be highly skilled in designing, implementing and managing a WSFC outside of SQL Server, because the availability of the SQL Server databases relies heavily on it.
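To give a feel for that dependency, here is a minimal PowerShell sketch of the WSFC groundwork a Basic Availability Group sits on; the node names, cluster name, and IP address (SQLNODE1, SQLNODE2, SQLCLUSTER) are placeholders, not values from the article:

  # Install the Failover Clustering feature on each node
  Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

  # Validate the prospective nodes, then build the cluster the AG will depend on
  Test-Cluster -Node "SQLNODE1", "SQLNODE2"
  New-Cluster -Name "SQLCLUSTER" -Node "SQLNODE1", "SQLNODE2" -StaticAddress "192.168.1.100"

  # Enable the Always On feature on each SQL Server instance (SqlServer module)
  Enable-SqlAlwaysOn -ServerInstance "SQLNODE1" -Force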

SQL Server 2016 logo

Learn More

vSphere Upgrade Options
Posted by Mike Preston on October 26, 2017

When it comes time for your vSphere upgrade, there are many different approaches to upgrading your ESXi hosts. An administrator who looks after a small cluster may update one way, whereas an administrator who looks after an enterprise with thousands of hosts may opt for another. How your environment is deployed also matters: factors such as whether your hosts are managed by a vCenter Server and whether they are members of a cluster all affect how you update to the latest version of ESXi. Certainly, some methods are much simpler to perform, some offer advantages when upgrading at scale, and some are more prone to user error. Let’s take a look at each method of upgrading our hosts below and discuss the benefits and drawbacks of each.
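Whichever method you end up choosing, the first steps are usually the same. As a quick, hedged PowerCLI sketch (the vCenter and host names are placeholders): record the current build, then evacuate the host into maintenance mode before touching it:

  # Connect to the vCenter Server that manages the hosts
  Connect-VIServer -Server "vcenter.lab.local"

  # Record the current ESXi version and build before upgrading
  Get-VMHost -Name "esxi01.lab.local" | Select-Object Name, Version, Build

  # Put the host into maintenance mode (a DRS cluster migrates its VMs away)
  Get-VMHost -Name "esxi01.lab.local" | Set-VMHost -State "Maintenance"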

VMware vSphere logo

Learn More

Best Freeware for VMware vSphere – RVTools
Posted by Vladan Seget on October 25, 2017

One of the best freeware applications for gathering information about VMware vSphere is definitely the RVTools utility. Today we’ll have a look at the features that are most useful for IT admins.

RVTools is a Windows .NET 4.0 application that uses the VI SDK to display information about your virtual environments. So, before you download and install the tool, check that your Windows system has at least .NET 4.0 installed.
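If you are not sure what is installed, here is one quick way to check (a sketch; the v4\Full registry key exists once .NET 4.0 or later is installed, and the Release value appears with 4.5 and later):

  # Read the installed .NET Framework 4.x version from the registry
  Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' |
      Select-Object Version, Release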

RVTools and Menu items

Learn More

How to configure a Multi-Resilient Volume on Windows Server 2016 using Storage Spaces
Posted by Vitalii Feshchenko on October 24, 2017

Introduction

Plenty of articles have been released about Storage Spaces and everything around this topic. However, I would like to consolidate the current information and lead you through the journey of configuring Storage Spaces on a standalone host.

The main goal of the article is to show a Multi-Resilient Volume configuration process.

How it works

In order to use Storage Spaces, we need to have faster (NVMe, SSD) and slower (HDD) devices.

So, we have a set of NVMe devices along with SAS or SATA HDDs, and we should create a performance tier and a capacity tier respectively.

The NVMe tier is used for caching: when hot blocks are written to the storage array, they land on the caching tier (SSDs or NVMe) first:

Data in Performance Tier
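As a preview of where the article is heading, here is a hedged PowerShell sketch of a multi-resilient volume: a mirrored performance tier on the faster media plus a parity capacity tier on HDD. The pool name, tier names, and sizes are illustrative, not prescriptive:

  # Pool all available physical disks (NVMe devices report MediaType SSD here)
  New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Windows Storage*" `
      -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

  # Define a mirrored performance tier and a parity capacity tier
  New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "Performance" `
      -MediaType SSD -ResiliencySettingName Mirror
  New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "Capacity" `
      -MediaType HDD -ResiliencySettingName Parity

  # Create the multi-resilient ReFS volume spanning both tiers
  New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "MRV01" -FileSystem ReFS `
      -StorageTierFriendlyNames "Performance", "Capacity" -StorageTierSizes 100GB, 900GB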

Learn More

StarWind Maintenance Mode Overview
Posted by Ivan Talaichuk on October 19, 2017

Howdy, folks! I would like to start my tale with a little backstory on the usefulness that “maintenance mode” brings us. And to do that, I’ll start from the times when updates meant downtime for production.

It’s no secret that any production environment sometimes needs maintenance, whether a software update or a hardware reconfiguration. To perform it, the administrator has to stop the production server for a certain period of time, and this can affect the reliability of the production environment: the fault tolerance level can drop, and so can performance. This is especially critical for small infrastructures consisting of 2 nodes.

So, let’s take a closer look at StarWind maintenance mode and what it delivers. First of all, it eliminates the downtime caused by planned node shutdowns by keeping nodes in a pre-synchronized state, so that a full synchronization does not need to be run afterward. As a result, the system doesn’t experience any performance or availability degradation.

Enable the maintenance mode on an HA device

Learn More

Cost and License considerations between Always On Availability Groups and Always On Basic Availability Groups
Posted by Shashank Singh on October 17, 2017

Windows Server edition considerations

With Windows Server 2012 and above, Standard Edition has full support for clustering, not just simple 2-node active/passive clusters but fully configured failover clustering. Before Windows Server 2012, only Windows Server Enterprise Edition could support Windows Server Failover Clustering (WSFC), so starting with Windows Server 2012, the licensing cost of clustering dropped significantly.

The cost of Windows Server 2012 Standard is almost the same as that of Windows Server 2008 R2 Standard, but Windows Server 2012 Datacenter Edition carries an almost 26% price increase. There is no difference in feature support between Windows Server 2012 Standard and Datacenter editions; the major difference is that a Standard license only covers hosting 2 virtual machines (by default), while Datacenter allows an unlimited number. You can host more than 2 VMs on Standard, but that implies extra licensing cost.

BAG vs AG

Learn More

SMB Direct in a Windows Server 2016 Virtual Machine Experiment
Posted by Didier Van Hoye on October 12, 2017

Introduction

Ever since Windows Server 2012, we have had SMB Direct capabilities in the OS, and Windows Server 2012 R2 added more use cases, such as live migration. In Windows Server 2016, even more workloads leverage SMB Direct, such as S2D and Storage Replica. SMB Direct leverages the RDMA capabilities of a NIC, which deliver high throughput at low latency combined with offloading work from the CPU to the NIC. The latter saves CPU cycles for the other workloads on the hosts, such as virtual machines.

Traditionally, in order for SMB Direct to work, the SMB stack needs direct access to the RDMA NICs. This means that, right up to Windows Server 2012 R2, we had SMB Direct running on physical NICs in the host or the parent partition/management OS. You could not have RDMA exposed on a vNIC or even on a host native NIC team (LBFO), and SMB Direct was also not compatible with SR-IOV. That was, and for those OS versions still is, common knowledge and a design consideration. With Windows Server 2016, things changed: you can now have RDMA exposed on a vSwitch and on management OS vNICs. Even better, the new Switch Embedded Teaming (SET) allows RDMA to be exposed in the same way on top of a teamed vSwitch. SET is an important technology here, as RDMA is still not exposed on a native Windows team (LBFO).
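To make that concrete, here is a minimal sketch of SET with RDMA-enabled host vNICs on Windows Server 2016; the switch, vNIC, and adapter names are placeholders:

  # Create a vSwitch with Switch Embedded Teaming on two RDMA-capable NICs
  New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true

  # Add management OS vNICs for SMB traffic
  Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETswitch"
  Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETswitch"

  # Expose RDMA on the host vNICs (new in Windows Server 2016)
  Enable-NetAdapterRdma -Name "vEthernet (SMB1)", "vEthernet (SMB2)"

  # Verify that SMB now sees RDMA-capable interfaces
  Get-SmbClientNetworkInterface | Where-Object RdmaCapable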

Mellanox InfiniBand Router

Learn More

Back to Enterprise Storage
Posted by Jon Toigo on October 11, 2017

An under-reported trend in storage these days is the mounting dissatisfaction with server-centric storage infrastructure as conceived by proprietary server hypervisor vendors and implemented as exclusive software-defined storage stacks. A few years ago, the hypervisor vendors seized on consumer anger around overpriced “value-add” storage arrays to insert a “new” modality of storage, so-called software-defined storage, into the IT lexicon. Touted as a solution for everything that ailed storage – and as a way to improve virtual machine performance in the process – SDS and hyper-converged infrastructure did rather well in the market. However, the downside of creating siloed storage behind server hosts was that storage efficiency declined by 10 percent or more on an enterprise-wide basis; companies were realizing less bang for the buck with software-defined storage than with the enterprise storage platforms they were replacing.

Shared and Silo'ed storages' components

Learn More

The dark side of converged storage networks
Posted by Andrea Mauro on October 10, 2017

Introduction

The fabric of a SAN (Storage Area Network) built on Fibre Channel has always been a dedicated network, with dedicated components (like FC switches).

But, starting with the iSCSI and FCoE protocols, the storage fabric can now be shared with the traditional network infrastructure, because at least layers 1 and 2 ride on a common Ethernet layer (for iSCSI, layers 3 and 4 are also the same as in TCP/IP networks).

Hosts (the initiators) in a converged network typically use Converged Network Adapters (CNAs) that provide both Ethernet and storage functions (usually FCoE and iSCSI). The result is that LAN and SAN share the same physical network:

LAN and SAN shared on the same physical network
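For iSCSI, nothing more than the ordinary Ethernet network is needed on the host side. A minimal sketch using the Windows software initiator (the portal address is a placeholder):

  # Point the software initiator at the target portal on the shared Ethernet fabric
  New-IscsiTargetPortal -TargetPortalAddress "192.168.0.50"

  # Discover the target and connect to it over the converged network
  $target = Get-IscsiTarget
  Connect-IscsiTarget -NodeAddress $target.NodeAddress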

Learn More

[Azure Automation] Migrate your scripts to Azure – Part 2
Posted by Florent Appointaire on October 5, 2017

I have recently been looking into Azure Automation from top to bottom. That’s why, over these 2 articles, we will see how to use this tool from A to Z:

  • [Azure Automation] Interface discovery – Part 1
  • [Azure Automation] Migrate your scripts to Azure – Part 2 (this post)

Today, we will see how to migrate your on-premises scripts to Azure Automation.
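As a taste of what that migration looks like, here is a hedged sketch using the AzureRM Automation cmdlets current at the time of writing; the script path, resource group, and account names are placeholders:

  # Import a local PowerShell script as a runbook
  Import-AzureRmAutomationRunbook -Path "C:\Scripts\MyScript.ps1" -Name "MyScript" `
      -Type PowerShell -ResourceGroupName "RG-Automation" `
      -AutomationAccountName "MyAutomationAccount"

  # Publish the runbook so it can be started
  Publish-AzureRmAutomationRunbook -Name "MyScript" `
      -ResourceGroupName "RG-Automation" -AutomationAccountName "MyAutomationAccount"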

Azure Automation logo

Learn More
