Design a ROBO infrastructure (Part 2): Design areas and technologies
Posted by Andrea Mauro on
February 24, 2017
In the previous post, we explained the business requirements and constraints that drive design and implementation decisions for mission-critical applications, and considered how risk can affect those decisions.
Now we will map the following technology aspects to the design requirements:
- Performance and scaling
- Risk and budget management
VMware’s vRealize Log Insight – The easy way to get datacenter insight
Posted by Michael Ryom on
December 21, 2016
For those of you who do not know vRealize Log Insight: it is a log collector and analyzer with a simple, intuitive GUI, and an easy way to manage logs and messages from all your datacenter devices. Best of all, it is not a VMware-only product: as long as a message is in syslog format, Log Insight can ingest it. And that is not all. If a device does not support syslog, like a Windows server, you can install the Log Insight agent on the OS in question. The agent can handle Windows Event Viewer messages and log files of any kind, so even applications that do not adhere to the usual logging mechanisms (syslog or the Windows Event Viewer) can still be collected, simply by specifying the location and format of the log file(s).
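As a rough illustration, collecting an arbitrary application log file with the agent is a matter of adding a `filelog` section to the agent's liagent.ini; the section name, path, and tag below are invented for the example, not defaults:

```ini
; Hypothetical liagent.ini fragment: collect a custom application's log files.
; "myapp" and the paths are placeholders for this sketch.
[filelog|myapp]
directory=C:\MyApp\Logs
include=app*.log
tags={"app":"myapp"}
```

With a section like this, the agent tails any matching file in the directory and forwards new lines to the Log Insight server, with the tag attached for filtering.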
How to Deploy and Manage Software-Defined Networking using SCVMM 2016 – Part I
Posted by Charbel Nemnom on
December 7, 2016
Software Defined Networking (SDN) in Windows Server 2016 provides a method to centrally configure and manage physical and virtual network devices such as routers, switches, load balancers and gateways in your datacenter. Virtual network elements such as Hyper-V Virtual Switch, Hyper-V Network Virtualization, and RAS Gateway are designed to be integral elements of your SDN infrastructure.
Please note that you must install Windows Server 2016 Datacenter edition for Hyper-V hosts and virtual machines (VMs) that run SDN infrastructure servers, such as Network Controller and Software Load Balancing nodes. However, you can run Windows Server 2016 Standard edition for Hyper-V hosts that contain only tenant workload virtual machines that are connected to SDN-controlled networks.
vCenter Server High Availability Review – Part 2
Posted by Askar Kopbayev on
November 25, 2016
In this second part of the VCHA review I will cover some 'gotchas' and configuration steps that are not covered in the VMware availability guide. We will also go through all the steps of the Advanced Configuration.
What is Key Storage Drive in Windows Server 2016 Hyper-V?
Posted by Charbel Nemnom on
October 13, 2016
Security is a critical requirement for any organization's systems. With the release of Windows Server 2016, Microsoft put a lot of effort into security and added many new features. One hot feature that will benefit small, medium, and enterprise environments alike is Shielded Virtual Machines with the Key Storage Drive (KSD). Rest assured, it will help you increase security whether you are a service provider or an enterprise customer.
Manage VM placement in Hyper-V cluster with VMM
Posted by Romain Serre on
September 23, 2016
The placement of virtual machines in a Hyper-V cluster is an important step to ensure performance and high availability. To make an application highly available, it is usually deployed as a cluster spread across two or more virtual machines. If a Hyper-V node crashes, the application must keep working.
But VM placement also concerns storage and network. Think about a storage solution where you have several LUNs (or Storage Spaces) aligned to service levels: maybe you have one LUN with HDDs in RAID 6 and another with SSDs in RAID 1. You don't want a VM that requires intensive IO to be placed on the HDD LUN.
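The storage side of that placement decision boils down to a tier-matching rule. A minimal sketch in Python (the `Lun` type, names, and `io_intensive` flag are invented for illustration; VMM models this with storage classifications, not code like this):

```python
from dataclasses import dataclass

@dataclass
class Lun:
    name: str
    tier: str       # "ssd" or "hdd"
    free_gb: int

def pick_lun(luns, io_intensive, needed_gb):
    """Pick a LUN whose tier matches the VM's IO profile and has enough space."""
    wanted = "ssd" if io_intensive else "hdd"
    candidates = [l for l in luns if l.tier == wanted and l.free_gb >= needed_gb]
    # Prefer the LUN with the most free space to balance capacity usage.
    return max(candidates, key=lambda l: l.free_gb) if candidates else None

luns = [Lun("LUN-HDD-R6", "hdd", 2000), Lun("LUN-SSD-R1", "ssd", 500)]
print(pick_lun(luns, io_intensive=True, needed_gb=100).name)  # LUN-SSD-R1
```

VMM formalizes exactly this idea with storage classifications: you label each LUN with a service level, and placement then matches the VM's requirement against the label.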
Don’t Fear but Respect Redirected IO with Shared VHDX
Posted by Didier Van Hoye on
August 25, 2016
When we got Shared VHDX in Windows Server 2012 R2 we were quite pleased as it opened up the road to guest clustering (Failover clustering in virtual machines) without needing to break through the virtualization layer with iSCSI or virtual Fibre Channel (vFC).
First of all, you need to be aware of the limits of using a shared VHDX in Windows Server 2012 R2.
- You cannot perform storage live migration
- You cannot resize the VHDX online
- You cannot do host based backups (i.e. you need to do in guest backups)
- No support for checkpoints
- No support for Hyper-V Replica
If you cannot live with these limitations, that's a good indicator this is not for you. But if you can, you should also be aware of the potential redirected IO impact that can and will occur. This doesn't mean it won't work for you, but you need to know about it, design and build for it, and test it realistically against your real-life workloads.
How Transparent Page Sharing memory deduplication technology works in VMware vSphere 6.0
Posted by Alex Samoylenko on
May 30, 2016
You may know that the memory page deduplication technology Transparent Page Sharing (TPS) becomes useless with large memory pages (it is even disabled in the latest versions of VMware vSphere). However, this doesn't mean that TPS goes into the trash bin: when the host server runs low on resources, ESXi may break large pages into small ones and deduplicate them afterwards. The large pages are prepared for deduplication beforehand: when the memory load grows to a certain threshold, the large pages are broken into small ones, and then, when the load peaks, a forced deduplication cycle is activated.
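The break-then-share mechanism can be sketched in a few lines of Python (a toy model only: real TPS hashes pages in the VMkernel and verifies a full byte-for-byte match before sharing, and the hash choice here is arbitrary):

```python
import hashlib

SMALL = 4 * 1024          # 4 KB small page
LARGE = 2 * 1024 * 1024   # 2 MB large page

def break_large_page(page: bytes):
    """Split one large page into 4 KB small pages, as ESXi does under pressure."""
    assert len(page) == LARGE
    return [page[i:i + SMALL] for i in range(0, LARGE, SMALL)]

def deduplicate(small_pages):
    """Share identical small pages: keep a single copy per unique content."""
    shared = {}
    for p in small_pages:
        # Hash the page contents; identical pages collapse onto one copy.
        shared.setdefault(hashlib.sha256(p).hexdigest(), p)
    return shared

zero_page = bytes(LARGE)  # a zero-filled large page: 512 identical small pages
print(len(deduplicate(break_large_page(zero_page))))  # 1
```

A zero-filled 2 MB page collapses to a single shared 4 KB page, which is why TPS savings are largest on hosts with many similar guests.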
TBW from SSDs with S.M.A.R.T Values in ESXi
Posted by Oksana Zybinskaya on
May 23, 2016
Solid-state drives are becoming widely used in ESXi hosts for caching (vFlash Read Cache, PernixData FVP), Virtual SAN, or plain datastores. Unfortunately, SSD cells have a limited number of write cycles: roughly 1,000 in consumer TLC SSDs up to 100,000 in enterprise SLC-based SSDs. Lifetime can be estimated from the TBW parameter the vendor provides in the specification; it describes how many terabytes can be written to the entire device before the warranty expires.
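The lifetime estimate is simple arithmetic: divide the rated TBW by your observed write rate. A quick sketch (the figures are illustrative, not from any particular drive):

```python
def estimated_lifetime_years(tbw_spec_tb: float, daily_writes_gb: float) -> float:
    """Estimate SSD endurance from the vendor TBW rating and daily writes."""
    daily_tb = daily_writes_gb / 1024   # convert GB/day to TB/day
    days = tbw_spec_tb / daily_tb       # days until the rated TBW is reached
    return days / 365

# e.g. a 150 TBW drive written at 50 GB/day lasts about 8.4 years on paper
print(round(estimated_lifetime_years(150, 50), 1))  # 8.4
```

In practice you would feed in the host writes observed via the drive's S.M.A.R.T. counters rather than a guess, since caching workloads can write far more than expected.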
A closer look at NUMA Spanning and virtual NUMA settings
Posted by Didier Van Hoye on
April 28, 2016
With Windows Server 2012, Hyper-V became truly NUMA aware. A virtual NUMA topology is presented to the guest operating system. By default, the virtual NUMA topology is optimized to match the NUMA topology of the physical host. This enables Hyper-V to deliver optimal performance for virtual machines running high-performance, NUMA-aware workloads where large numbers of vCPUs and lots of memory come into play. A great and well-known example of this is SQL Server.