Recently, it came to light that a private, mysterious network stretches between London and Frankfurt that is twice as fast as the normal Internet. The connection, provided by a series of microwave dishes on masts, was a complete secret to everyone but one company. Only when a competitor completed its own microwave link between the two cities did the first company reveal that it, too, had a link between them, in order to claim a share of this potential market.
Similar stories can be found all over the world, but because these networks are privately owned, and because they are often used by financial groups trying to find an edge on the stock market and eke out a few extra billions, you have to dig hard to find them.
If you have ever worked in IT, you have heard the acronym RAID. RAID stands for Redundant Array of Independent (some say Inexpensive) Disks. It refers to a group of disks logically presented as one or more volumes to an external system – a server, for instance.
The two main reasons to have RAID are performance and redundancy. With RAID, you can reduce access times and increase data throughput. RAID also allows one or more disks in the array to fail without any data being lost.
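To make the redundancy point concrete, here is a minimal Python sketch of RAID 5-style XOR parity; the toy block sizes and three-disk layout are invented for the example:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe across three data disks (toy-sized blocks).
data_disks = [b"AAAA", b"BBBB", b"CCCC"]

# The parity block is the XOR of all data blocks in the stripe.
parity = xor_blocks(data_disks)

# If one disk fails, XOR-ing the survivors with the parity rebuilds it.
failed = 1  # pretend disk 1 died
survivors = [blk for i, blk in enumerate(data_disks) if i != failed]
recovered = xor_blocks(survivors + [parity])
assert recovered == data_disks[failed]
```

Real RAID 5 rotates the parity block across the disks stripe by stripe, but the arithmetic is exactly this.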
WD has issued Blue- and Green-branded SATA SSDs, its first drives based on SanDisk technology.
The Green brand is aimed at secondary storage, being reliable, cool-running and eco-friendly, whereas Blue drives are built for primary PC storage. Other WD brand colours include Black for enthusiast products and Red for NAS and SOHO (small office, home office) use.
The Blue and Green SATA SSDs are designed mainly for notebooks, PCs and workstations. The Blue product is optimized for multitasking and resource-heavy applications, while WD states that the Green SSDs deliver essential-class performance and are a great option for everyday use.
Virtual machine placement in a Hyper-V cluster is an important step in ensuring performance and high availability. To make an application highly available, a guest cluster is usually deployed across two or more virtual machines. If a Hyper-V node crashes, the application must keep working.
But VM placement also concerns storage and networking. Consider a storage solution with several LUNs (or Storage Spaces) tiered by service level: perhaps one LUN backed by HDDs in RAID 6 and another backed by SSDs in RAID 1. You do not want a VM with intensive IO requirements to land on the HDD LUN.
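As a rough illustration of that placement rule, here is a hypothetical Python sketch; the LUN names, tiers, free-space figures and the io_intensive flag are all invented for the example:

```python
# Hypothetical LUNs with service levels, matching the scenario above.
luns = {
    "LUN-HDD-RAID6": {"tier": "capacity", "free_gb": 4000},
    "LUN-SSD-RAID1": {"tier": "performance", "free_gb": 800},
}

def pick_lun(vm_size_gb, io_intensive):
    """Place IO-intensive VMs on the SSD tier, everything else on HDD."""
    wanted = "performance" if io_intensive else "capacity"
    for name, props in luns.items():
        if props["tier"] == wanted and props["free_gb"] >= vm_size_gb:
            return name
    raise RuntimeError("no LUN matches the requested service level")

print(pick_lun(vm_size_gb=100, io_intensive=True))   # LUN-SSD-RAID1
print(pick_lun(vm_size_gb=500, io_intensive=False))  # LUN-HDD-RAID6
```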
Samsung announced the 960 Pro and 960 Evo, its next generation of M.2 PCIe SSDs. Like the 950 Pro, the 960 Pro and 960 Evo are PCIe 3.0 x4 drives using the latest NVMe protocol for data transfer. The 960 Pro offers a peak read speed of 3.5GB/s and a peak write speed of 2.1GB/s, while the Evo offers 3.2GB/s and 1.9GB/s respectively. The 950 topped out at a mere 2.5GB/s and 1.5GB/s.
The 960 Pro and the 960 Evo are slated for an October release. The Pro starts at $329 for 512GB of storage, rising to a cool $1,299 for the 2TB version. Evo pricing runs from $129 for a 250GB version to $479 for a 1TB version.
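For the curious, the per-gigabyte cost works out from those launch prices with straightforward arithmetic:

```python
# Launch prices quoted above: name -> (capacity in GB, price in USD).
prices = {
    "960 Pro 512GB": (512, 329),
    "960 Pro 2TB":   (2048, 1299),
    "960 Evo 250GB": (250, 129),
    "960 Evo 1TB":   (1000, 479),
}

for name, (gb, usd) in prices.items():
    print(f"{name}: ${usd / gb:.2f}/GB")
# The Evo lands around $0.48-0.52/GB, the Pro around $0.63-0.64/GB.
```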
In Windows Server 2016, Microsoft improved Hyper-V backup to address many of the concerns raised in our earlier article on the Hyper-V backup challenges Windows Server 2016 needed to address:
They avoid the need for agents by making the APIs remotely accessible: it's all WMI calls directly to Hyper-V (see the first sketch after this list).
They implemented their own change block tracking (CBT) mechanism for Windows Server 2016 Hyper-V to reduce the amount of data that needs to be copied during every backup (see the second sketch after this list). It can be leveraged by any backup vendor and relieves those vendors of the responsibility of building their own CBT, which makes it easier for them to support new Hyper-V releases quickly. It also avoids the need to insert drivers into the IO path of the Hyper-V hosts. Testing and certification still have to happen, of course, as every vendor can now be affected by a bug Microsoft introduces.
They are no longer dependent on the host VSS infrastructure. This eliminates the storage overhead, as well as the storage-fabric IO overhead and performance issues, that come with taking host-level VSS snapshots of an entire LUN/CSV for even a single VM.
This helps avoid the need for hardware VSS providers delivered by storage vendors and delivers better results with storage solutions that don't offer hardware providers.
Storage vendors and backup vendors can still integrate this with their snapshots for speedy, easy backups and restores. But because the VM-level backup work is separated from an (optional) host VSS snapshot, the performance hit is smaller and the total duration significantly reduced.
It's efficient in regard to the amount of data that needs to be copied to the backup target and stored there. This reduces the capacity needed and, for some vendors, the near-hard dependency on deduplication to make backup feasible in terms of cost.
These capabilities are available to anyone (backup vendors, storage vendors, home-grown PowerShell scripts …) who wishes to leverage them, and nothing prevents implementing synthetic full backups, merging backups as they age, and so on. It's capable enough for great backup solutions to be built on top of it.
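To give a feel for the agentless, WMI-based approach, here is a minimal sketch using the third-party wmi package on Windows; the host name is a placeholder, and root\virtualization\v2 is the Hyper-V WMI v2 namespace:

```python
import wmi  # third-party package wrapping the pywin32 WMI bindings

# Connect remotely to the Hyper-V WMI v2 namespace - no agent on the host.
conn = wmi.WMI(computer="hyperv-host01", namespace=r"root\virtualization\v2")

# Msvm_ComputerSystem covers the host itself plus every VM running on it.
for system in conn.Msvm_ComputerSystem():
    if system.Caption == "Virtual Machine":
        print(system.ElementName, system.EnabledState)
```

And conceptually, change block tracking behaves like the toy Python model below: writes mark blocks dirty, and an incremental backup copies only the dirty set before resetting it. The real mechanism (Resilient Change Tracking) lives inside the Hyper-V storage stack; this only illustrates the idea:

```python
class ChangeTracker:
    """Toy CBT: remember which blocks were written since the last backup."""

    def __init__(self, n_blocks):
        self.dirty = set(range(n_blocks))   # first backup copies everything

    def on_write(self, block_no):
        self.dirty.add(block_no)            # hooked into the write path

    def blocks_for_backup(self):
        changed, self.dirty = self.dirty, set()
        return sorted(changed)              # copy only these, then reset

disk = ChangeTracker(n_blocks=8)
print(disk.blocks_for_backup())  # full backup: [0, 1, ..., 7]
disk.on_write(2)
disk.on_write(5)
print(disk.blocks_for_backup())  # incremental backup: [2, 5]
```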
When we got Shared VHDX in Windows Server 2012 R2, we were quite pleased, as it opened the road to guest clustering (failover clustering in virtual machines) without needing to break through the virtualization layer with iSCSI or virtual Fibre Channel (vFC).
First of all, you need to be aware of the limits of using a shared VHDX in Windows Server 2012 R2:
You cannot perform storage live migration
You cannot resize the VHDX online
You cannot do host based backups (i.e. you need to do in guest backups)
No support for checkpoints
No support for Hyper-V Replica
If you cannot live with these limits, that's a good indicator this is not for you. But if you can, you should also account for the potential redirected IO impact that can and will occur. This doesn't mean it won't work for you, but you need to know about it, design and build for it, and test it realistically against your real-life workloads.
To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – it is the most revolutionary thing that has ever happened in business computing. While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest of a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.
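Part of the efficiency argument is queueing. AHCI, the register interface behind most SATA controllers, exposes a single command queue of 32 entries, while NVMe allows up to 65,535 I/O queues with up to 65,536 commands each. The back-of-the-envelope comparison:

```python
# Interface limits from the AHCI and NVMe specifications, not measured numbers.
ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_536

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_outstanding}")      # 32
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")    # ~4.3 billion
```

Whether any workload outside the largest databases actually generates that much parallelism is, of course, exactly the sceptic's point.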
I am not against faster I/O processing, of course. It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the von Neumann machine. Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle. That is why engineers used lots of memory – as caches in front of disk storage or as buffers on the disk electronics themselves – to help mask or spoof the speed mismatch.
vSphere Replication has proved to be a great bonus with any paid vSphere license. It is a simple yet impressive tool that provides a cheap, semi-automated disaster recovery solution. Another great use case for vSphere Replication is the migration of virtual machines.
vSphere Replication 6.x came with plenty of useful new features:
Network traffic compression to reduce replication time and bandwidth consumption
Linux guest OS quiescing
Increase in scalability – one VRA server can replicate up to 2000 virtual machines
Replication traffic isolation – which is what we are going to talk about today.
The goal of traffic separation is to protect network performance by ensuring that replication traffic does not impact other business-critical traffic. This can be done by using VDS Network I/O Control to set limits or shares for outgoing or incoming replication traffic. Another benefit of traffic isolation is that it addresses the security concern of mixing sensitive replication traffic with other traffic types.
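Note that shares only matter under contention: each traffic type gets bandwidth in proportion to its shares out of the total active shares. A hypothetical Python sketch of that proportional split (the share values are examples, not VMware defaults):

```python
# Example NIOC-style shares competing for a 10 Gbit/s uplink under contention.
link_gbps = 10
shares = {
    "virtualMachine": 100,
    "vsphereReplication": 50,
    "vmotion": 50,
}

total = sum(shares.values())
for traffic, value in shares.items():
    # Each type's worst-case slice is proportional to its shares.
    print(f"{traffic}: {link_gbps * value / total:.1f} Gbit/s")
```

A limit, by contrast, caps the traffic type even when the link is idle, which is why shares are usually the gentler tool.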
Many of you know that VMware has a technology called vSphere Integrated Containers (VIC). It involves launching Docker (and other) containers in small virtual machines running a lightweight Linux-based operating system.
That operating system is VMware Photon OS 1.0, which was finally released just recently. This is the first release version of this operating system from VMware, but in the long run it could become the company's main platform for virtual appliances, replacing the everlasting SUSE Linux.