As you know, VMworld 2016, VMware's main virtualization conference, is now being held in Las Vegas. Several interesting announcements were made on the first day of the conference. For example, VMware presented VMware Cloud Foundation, which will soon be available on the IBM platform and later from other vendors as well. It gives customers a ready-made on-site infrastructure with all the necessary software and hardware components, plus preconfigured and integrated management and automation tools such as NSX, Virtual SAN and vRealize.
Auto Deploy is one of the most underrated vSphere features. I have seen many vSphere designs where Auto Deploy was dismissed as overcomplicated and a manual build of the ESXi servers was preferred. That is pretty frustrating, as we IT professionals strive to automate as much of our day-to-day work as possible.
Configuring Auto Deploy is definitely not as simple as, say, VSAN, but it really pays off when you manage hundreds or thousands of ESXi hosts.
Yesterday I saw a blog post in the Homelab subreddit discussing which Intel NUC to choose. I have spent quite some time recently choosing the right server for my home lab expansion, and I have considered a lot of options.
Like many fellow IT professionals, I was also looking at the Intel NUC, but luckily last month I read on TinkerTry.com that Supermicro had just released new Mini-1U SuperServers – the SYS-E300-8D and SYS-E200-8D. After some discussions with colleagues and other people on Reddit and TinkerTry, I came to the conclusion that if you aim to run a virtualization home lab, the Intel NUC shouldn't be considered. I believe Supermicro is the new king of the mini-server market for home labs.
VMware long ago stopped adding newly released features and functionality to the old C# client in the hope of pushing customers to the vSphere Web Client. However, even with new features restricted to the Web Client, adoption has been slow – partly because no one likes change, but mostly because of the overall sluggishness of the Flash-based vSphere Web Client.
Just this year, with the release of vSphere 6.0 Update 2, we saw the "Embedded Host Client" make its way into a release, allowing us to manage individual ESXi hosts through an HTML5-based interface built right into the product. Now we are seeing the same HTML5 technology in the newly released vSphere HTML5 Web Client fling.
vSphere Replication has proved to be a great bonus to any paid vSphere license. It is an amazing and simple tool that provides a cheap, semi-automated disaster recovery solution. Another great use case for vSphere Replication is the migration of virtual machines.
vSphere Replication 6.x came with plenty of useful new features:
Network traffic compression to reduce replication time and bandwidth consumption
Linux guest OS quiescing
Increased scalability – a single vSphere Replication appliance can replicate up to 2000 virtual machines
Replication traffic isolation – which is what we are going to talk about today.
The goal of traffic separation is to improve network performance by ensuring that replication traffic does not impact other business-critical traffic. This can be done with VDS Network I/O Control, by setting limits or shares for outgoing and incoming replication traffic. Traffic isolation also addresses the security concern of mixing sensitive replication traffic with other traffic types.
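To make the NIOC piece concrete, here is a minimal pyVmomi sketch that caps the built-in vSphere Replication traffic class on a VDS. It assumes NIOC version 3 is enabled on the switch; the vCenter address, credentials, switch name and the 1000 Mbit/s limit are all placeholders, not recommendations.

```python
# Minimal sketch: cap the built-in "vSphere Replication (VR) Traffic"
# class on a VDS with Network I/O Control v3, using pyVmomi.
# Host, credentials and switch name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip cert checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Locate the distributed switch by name (assumed to be "dvs-01").
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvs-01")
view.Destroy()

# System traffic classes live in infrastructureTrafficResourceConfig;
# vSphere Replication uses the key "hbr" (host-based replication).
traffic = dvs.config.infrastructureTrafficResourceConfig
for res in traffic:
    if res.key == "hbr":
        res.allocationInfo.limit = 1000  # Mbit/s; -1 would mean unlimited

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    infrastructureTrafficResourceConfig=traffic)
dvs.ReconfigureDvs_Task(spec)
Disconnect(si)
```

Shares work the same way: leave the limit unset and adjust allocationInfo.shares instead when you only want to prioritize replication traffic under contention rather than hard-cap it.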
As far back as VMworld 2014, VMware announced its Project Fargo technology, also widely known as VMFork. It makes it possible to create a working copy of a running virtual machine on the vSphere platform very quickly.
VMFork creates a virtual machine clone on the fly (a VMX file and a process in memory) that shares the same memory as the parent VM. The child VM cannot write to that shared memory; it writes its own data to separately allocated pages. Disks work the same way: with copy-on-write, the child VM's changes relative to the parent VM's base disk are written to the child VM's delta disk.
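VMFork stayed a private API for some time, but VMware later exposed it publicly as the Instant Clone API (InstantClone_Task) in vSphere 6.7. Purely as an illustration of the fork operation described above, a pyVmomi sketch against that later API might look like this (the vCenter address, credentials and VM names are placeholders):

```python
# Hedged sketch of a VMFork-style clone via the Instant Clone API
# that vSphere 6.7 later made public. Connection details and VM
# names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the parent VM (assumed name "parent-vm"); it must be powered on.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
parent = next(v for v in view.view if v.name == "parent-vm")
view.Destroy()

# The child shares the parent's memory pages and base disk; its own
# writes go to newly allocated pages and a copy-on-write delta disk.
spec = vim.vm.InstantCloneSpec(
    name="child-vm-01",
    location=vim.vm.RelocateSpec())  # stay on the same host/datastore
task = parent.InstantClone_Task(spec)
Disconnect(si)
```

The parent keeps running throughout; the child resumes instantly from the parent's memory image and diverges only through its own pages and delta disk.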
As we know, VMware's first attempt in the field of hyperconvergence, EVO:RAIL, was a good-quality software product set, which nevertheless failed because of its licensing policy. Specifically, it required buyers to acquire new vSphere licenses, with no exception for existing vSphere users who liked the idea of adopting hyperconverged infrastructure.
When the time comes to decide whether to go with the vSphere Distributed Switch or the Cisco Nexus 1000v, it is hard to tell which product is superior, and you will find many different and quite contradictory opinions.
While quite often it is a political decision, based on the answer to the question "Who is going to manage the virtual networking?", there are many other aspects you, as an infrastructure designer, should be aware of.
Recently VMware announced the End of Sale of the Nexus 1000v, which caused some confusion among clients. I know customers who were pretty sure Cisco had discontinued the Nexus 1000v, but rest assured: Cisco remains fully committed to developing virtual networking and to supporting the Nexus 1000v on the latest and future versions of vSphere.
Many of you know that VMware has a technology called vSphere Integrated Containers (VIC). It runs Docker (and other) containers in small virtual machines with a lightweight Linux-based operating system.
This operating system is VMware Photon OS 1.0, which was finally released just recently. It is the first release version of this operating system from VMware, but in the long run it could become the main platform for VMware's virtual appliances, replacing the long-standing SUSE Linux.
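Because a VIC virtual container host exposes a Docker-compatible API endpoint, the client-side workflow is ordinary Docker. As a hedged sketch using the Docker SDK for Python (the endpoint address and port are placeholders, and TLS is omitted for brevity):

```python
# Hedged sketch: drive a VIC virtual container host through its
# Docker-compatible API endpoint with the Docker SDK for Python.
# The endpoint address/port are placeholders; TLS is omitted here.
import docker

# A VIC virtual container host exposes the Docker API over TCP.
client = docker.DockerClient(base_url="tcp://vch.example.com:2375")

print(client.version())  # confirm the endpoint answers like a Docker host

# Each container lands in its own lightweight Photon OS-based VM,
# but from the client side this is plain Docker.
container = client.containers.run("nginx:alpine", detach=True)
print(container.id)
```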
This is a comprehensive comparison of the leading products in the Software-Defined Storage market: Microsoft Storage Spaces Direct, VMware Virtual SAN and StarWind Virtual SAN. It covers numerous use cases based on different deployment scales and architectures, because the products have different aims. Since the market is already large, the vendors used to occupy different parts of it, but lately they have entered into full-scale competition, adapting their products to meet general demand. This post analyzes how Microsoft, VMware and StarWind fare in the Software-Defined Storage market right now. The approach is practical, and all the statements are based on the experience of virtualization administrators and engineers from all over the world.