To hear advocates talk about NVMe – a de facto standard created by a group of vendors led by Intel to connect flash memory storage directly to a PCIe bus (that is, without using a SAS/SATA disk controller) – it is the most revolutionary thing that has ever happened in business computing. While the technology provides a more efficient means to access flash memory, without passing I/O through the buffers, queues and locks associated with a SAS/SATA controller, it can be seen as the latest in a long line of bus extension technologies – and perhaps one that is currently in search of a problem to solve.
I am not against faster I/O processing, of course. It would be great if the world finally acknowledged that storage has always been the red-headed stepchild of the Von Neumann machine. Bus speeds and CPU processing speeds have always been capable of driving I/O faster than mechanical storage devices could handle. That is why engineers used lots of memory – as caches ahead of disk storage or as buffers on disk electronics directly – to help mask or spoof the mismatch of speed.
Windows Server 2016 – Storage Spaces Direct Hyper-converged [image credit: Microsoft]
With the release of Windows Server 2016, Microsoft is introducing Storage Spaces Direct (S2D), which enables building highly available software-defined storage systems from locally attached storage. This storage can be consumed by VMs running on the same cluster (in hyper-converged mode), or it can be presented as a file share (in disaggregated mode). The hyper-converged deployment scenario puts the Hyper-V (compute) and Storage Spaces Direct (storage) components on the same cluster, with virtual machine files stored on local Cluster Shared Volumes (CSVs). Once Storage Spaces Direct is configured and the CSV volumes are available, configuring and provisioning Hyper-V is the same process, using the same tools, as any other Hyper-V deployment on a failover cluster.
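As a rough illustration of the hyper-converged flow described above, here is a minimal PowerShell configuration sketch. The node names, cluster name, volume name and size are hypothetical placeholders; it assumes four Windows Server 2016 nodes with eligible local drives and the Failover Clustering feature already installed.

```powershell
# Validate the nodes, including the Storage Spaces Direct checks
# (node names below are hypothetical -- substitute your own)
Test-Cluster -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4 `
    -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"

# Create the cluster with no shared storage; S2D will use each node's local disks
New-Cluster -Name S2D-Cluster -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4 -NoStorage

# Enable Storage Spaces Direct, which claims the eligible local drives into a pool
Enable-ClusterStorageSpacesDirect

# Carve a resilient CSV volume out of the auto-created pool for VM files
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore01" `
    -FileSystem CSVFS_ReFS -Size 2TB
```

After the `New-Volume` step the CSV appears under `C:\ClusterStorage\`, and Hyper-V VMs can be placed on it exactly as on any other cluster volume.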
Last month, Microsoft said that Windows Server 2016 will be released next month. Windows Server 2016 brings a lot of new features compared to the previous Windows Server version for Hyper-V, networking and storage. In this post I will try to convince you, with eight reasons, to move from earlier Windows Server editions to Windows Server 2016.
With Windows Server 2016 we have gained some very welcome capabilities for cost-effective VDI deployments using all in-box technologies. The main areas of improvement are in storage, RemoteFX, and Discrete Device Assignment for hardware pass-through to the VM. Let's take a look at what's possible now and think out loud about what solutions are possible, as well as their benefits and drawbacks.
When I first started working for my current employer back in 2013, one of the first projects was to address business continuity concerns. The brief was very… brief… no details other than "we have another building on the site which is linked by fibre optic; if our current physical servers had a problem, we want to be back up and running in under 4 hours". There was no guidance on what we should use or how to accomplish this, so off I went in search of solutions.
The hyperconverged infrastructure appliance (HCIA) market is growing fast, and hardware competitors are building on Dell, HP, SuperMicro and other servers rather than Cisco's UCS hardware. EMC covers only a portion of the market, so Cisco should address the rest in order to both preserve its current UCS market share and grow it further.
Share your experience and get $500 by participating in the StarWind contest. Tell us anything you want about how you carried out your hyperconverged project: write an interesting article focusing on the hyperconverged storage architecture, send your story to us, and we'll publish it on our blog.
It used to be that, when you bought a server with a NIC and some internal or direct-attached storage, it was simply called a server. If it had some tiered storage – different media with different performance characteristics and different capacities – and some intelligence for moving data across "tiers," we called it an "enterprise server". If the server and storage kit were clustered, we called it a high availability enterprise server. Over the past year, though, we have gone through a collective terminology refresh.
Today, you cobble together a server with some software-defined storage software, a hypervisor, and some internal or external flash and/or disk and the result is called “hyper-converged infrastructure.” Given the lack of consistency in what people mean when they say “hyper-converged,” we may be talking about any collection of gear and software that a vendor has “pre-integrated” before marking up the kit and selling it for a huge profit. Having recently requested information from so-called hyper-converged infrastructure vendors, I was amazed at some of the inquiries I received from would-be participants.