Get All-Flash Performance from a Disk with StarWind LSFS Technology


Host:
Max Kolomyeytsev, Product Manager, StarWind Software.

Duration: 40:44

PUBLISHED/UPDATED: December 16th, 2015

Key points of the webinar:

  • History of virtualization: how it got to virtualized shared storage and why
  • I/O Blender. What it is and why it affects storage performance
  • How to overcome I/O Blender
  • How to completely eliminate I/O Blender

It’s common to think that the only way to get high-performance storage is an all-flash array, which is a real budget-killer. However, StarWind offers a solution to this problem. Achieve all-flash performance for your virtual storage without breaking the bank with StarWind Log-Structured File System (LSFS) technology. With StarWind Log-Structuring, you can unchain inexpensive spindle drives to achieve performance even better than some all-flash arrays can provide.

Webinar transcript

Virtue for Servers, Troublemaker for Storage

A while ago, say 10-15 years, hardly anyone used virtualization. We had piles of servers sitting in our data centers. Everything was great, except that our resources were not intensively utilized. One server ran one application, and there was no way to be flexible in that kind of environment. A cluster had two hosts and ran one application inside. There was no way to be flexible, no way to add applications without paying for more hardware. There was also no way to manage your infrastructure from one place, oversee all your applications, and see how many resources were consumed.
Virtualization, however, has changed all that. Even a single server can now run multiple applications. We get almost 99% resource utilization with multiple applications consolidated on one host. In most data centers, this translates into virtual machines, virtual desktops, and containers. Of course, with virtualization we also get much more effective storage management. We no longer have islands of storage; we have a single resource pool used by all applications. And of course we have simple management: we open one console and can see all our applications, all our virtual machines, and all our virtual clusters. We control everything from one place.
Networking is virtual too, so there is no need to go to the data center to reconnect or rewire something. You can do that from one management console, sitting with an iPad and drinking cocktails wherever you are. Management has become that simple. This webinar is about solutions to the storage problems that come with virtualization. Multiple applications hit the storage hard, even if we have a big RAID 10 array built entirely from 15K SAS spindle drives. It really starts crawling when all applications access one array at the same time. As a result, storage arrays, especially spinning ones, get really slow in a virtualized environment, and this is called the I/O Blender effect.

I/O Blender, What Is It?

We have several virtual machines running different applications, all residing on one storage resource. Since each application wants to use its own area of the disk array, the underlying storage ends up doing random reads and writes all the time. It has to service multiple applications and deliver data from all over the drive almost instantly to maintain a good service level and good latency. Individually, your SQL databases work fine, your Exchange works fine, and your file server is fine, but once all these loads are consolidated on one disk array, performance becomes a real issue. You may see it as applications being slow in the middle of the day, or a virtual desktop being slow when you log in in the morning. Users experience this performance degradation because of the I/O Blender.
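
To make the effect concrete, here is a small, purely illustrative Python sketch (not StarWind code, and with made-up VM names and offsets) of how several per-VM write streams, each sequential on its own, arrive at the shared array as a near-random mix of distant offsets.

```python
import random

# Hypothetical layout: each VM's virtual disk lives in its own region of the array.
VM_REGIONS = {"sql": 0, "exchange": 1_000_000, "fileserver": 2_000_000}

def vm_stream(name, start, length=5):
    """One VM writing sequentially inside its own virtual disk."""
    return [(name, start + i) for i in range(length)]

streams = [vm_stream(vm, base) for vm, base in VM_REGIONS.items()]

# The hypervisor services all VMs concurrently, so their I/O is interleaved.
blended = [io for group in zip(*streams) for io in group]
random.shuffle(blended)  # scheduling jitter makes the mix even less orderly

for vm, lba in blended:
    print(f"array sees write at LBA {lba:>9} (from {vm})")
# Each VM was sequential on its own, but the array now jumps between
# distant regions on every I/O -- the "I/O Blender" effect.
```
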

Overcoming I/O Blender

How can we address the I/O Blender in the virtualization world? Basically, there are certain workarounds. We can separate typical workloads into groups and simply deploy one RAID group per workload type. However, this reintroduces the problem we had with non-virtualized environments: we have islands of resources again, and we can no longer move our applications between those resources flexibly and non-disruptively. We adopted virtualization for flexibility, and now we are giving up flexibility for performance.
A second solution may be deploying all-flash storage for your virtualization systems. That is a really easy answer to the I/O Blender, because flash is extremely fast and does not suffer from random-access problems the way spindles do. Spindles simply collapse: they start crunching and deliver 1,000 IOPS instead of 20,000 IOPS. The issue is that all-flash storage is still really expensive, even though SSDs keep getting cheaper. Certain vendors offer really sweet pricing, like $400 per terabyte of SSD, but you will not find those prices on the enterprise server market, so it becomes a “support it yourself” solution, which is not the kind of thing we are looking for in a virtualization world, especially if you want to run your business on it.
A third solution for overcoming the I/O Blender would be implementing storage that knows about the I/O Blender and knows how to deal with it. I call it a VM-Centric File System or VM-Centric Storage Solution.
So what is a VM-Centric Storage Solution? The main idea is to use StarWind with its Log-Structuring technology. We have I/O from applications coming down to storage. Inside the storage layer, StarWind uses three levels of media where data is stored. It is not simple tiering, where hot data is put in RAM, then on flash, and then on spindle; there is a more intelligent approach. First, the I/O stream is buffered in RAM, where StarWind applies deduplication to get rid of identical blocks. This reduces the amount of data actually written to disk, and metadata about deduplicated blocks is written into a separate deduplication log, so we always know where to get the data from even though redundant blocks are no longer stored; each block is kept in one place. After that, StarWind LSFS consolidates all these small writes into huge sequential blocks and writes them to the underlying storage.
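
The write path described above can be sketched in a few lines of Python. This is a simplified illustration under assumed block sizes and invented names, not StarWind’s actual implementation: small random writes are buffered in RAM, duplicate blocks are dropped and noted in a deduplication log, and whatever remains is flushed as one large sequential segment.

```python
import hashlib

SEGMENT_SIZE = 8  # flush after this many unique blocks (illustrative only)

class LogStructuredWriter:
    def __init__(self, disk):
        self.disk = disk              # append-only backing store (a plain list here)
        self.ram_buffer = []          # small writes accumulate in RAM first
        self.dedup_log = {}           # block hash -> list of LBAs referencing one stored copy

    def write(self, lba, block):
        digest = hashlib.sha256(block).hexdigest()
        if digest in self.dedup_log:
            # Identical block already stored: record a reference, write nothing.
            self.dedup_log[digest].append(lba)
            return
        self.dedup_log[digest] = [lba]
        self.ram_buffer.append((lba, block))
        if len(self.ram_buffer) >= SEGMENT_SIZE:
            self.flush()

    def flush(self):
        # Many small random writes leave RAM as one big sequential append.
        if self.ram_buffer:
            self.disk.append(list(self.ram_buffer))  # one streaming write
            self.ram_buffer.clear()

disk = []
w = LogStructuredWriter(disk)
for lba in (7, 123, 5000, 42, 7, 99, 123, 8, 64, 17):   # scattered "random" writes
    w.write(lba, b"data-%d" % (lba % 3))                # some payloads repeat
w.flush()
print(f"segments written to disk: {len(disk)}")
```
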
The key to eliminating the I/O Blender is writing all these blocks sequentially: we never use the underlying spinning storage randomly. It delivers streaming speed instead of random IOPS, which is better by an order of magnitude. This is how StarWind deals with the I/O Blender. It is not a unique solution in the storage world, but it is unique in the software storage world; there are no other solutions that do full Log-Structuring in the Virtual SAN world. You may see similar technologies from companies like NetApp or Nimble, namely NetApp WAFL or Nimble CASL. They essentially do the same thing using proprietary hardware, such as DRAM cards and flash, to consolidate data before it goes to the spindles. But if you want a purely software-defined option, StarWind is the only possible answer.

Parity RAID Resurrection & Other Miracles

What does this bring us in a virtualization environment? What did we have before, and what can we use now to get top-notch performance from seemingly slow storage? Firstly, with StarWind LSFS you get certain storage miracles. You would never imagine RAID 5, which has practically vanished from the storage community, working in virtualization again. All parity RAIDs were banished there: RAID 5/6/50/60 typically crawls under virtualization workloads, and if a drive fails on top of that, you can say goodbye to your production environment.
However, with StarWind Log-Structuring, RAID 5, RAID 6, or their derivatives work better than RAID 10 typically does in a virtualization environment. If you take six 10K SAS drives and put them in RAID 10, that RAID 10 will be slower than a RAID 5 built from the same number of drives with the StarWind Log-Structured File System on top. NL-SAS RAIDs then show more IOPS than a 10K SAS RAID, and sometimes more than an SSD array. This is all because, with the Log-Structured File System, the array works at streaming speed instead of its random-access speed.
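
A back-of-the-envelope calculation, using the standard RAID write-penalty rules of thumb rather than StarWind’s measured figures, shows why this is plausible: small random writes cost roughly 2 disk I/Os on RAID 10 and 4 on RAID 5 (read data, read parity, write data, write parity), whereas a log-structured full-stripe write lets RAID 5 compute parity in memory and stream the whole stripe out with no read-modify-write at all.

```python
# Rough, illustrative numbers only: six 10K SAS drives, ~140 random IOPS each.
drives, iops_per_drive = 6, 140

raid10_random = drives * iops_per_drive / 2   # mirror: every write hits 2 drives
raid5_random  = drives * iops_per_drive / 4   # read-modify-write: 4 I/Os per write

print(f"RAID 10, small random writes: ~{raid10_random:.0f} write IOPS")
print(f"RAID 5,  small random writes: ~{raid5_random:.0f} write IOPS")

# With log-structured full-stripe writes there is no read-modify-write: each drive
# just streams sequentially (say ~150 MB/s for a 10K SAS drive), so 4 KB write IOPS
# are bounded by streaming bandwidth instead. This is a theoretical ceiling that
# ignores metadata, cache, and garbage-collection overhead.
stripe_data_drives = drives - 1               # one drive's worth of parity per stripe
mb_per_sec = 150
full_stripe_4k_iops = stripe_data_drives * mb_per_sec * 1024 // 4
print(f"RAID 5 + full-stripe log writes: up to ~{full_stripe_4k_iops} 4 KB writes/s")
```
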
Applying this technology to an all-flash array also brings certain benefits. First of all, our SSDs live longer, because we write full, big blocks. We never issue read-modify-write cycles to the hardware; we only write, later mark blocks as free, and then write into them again, and we never rewrite data in place. A useful side effect for SSDs is that this gets rid of hot spots. If you have a really I/O-hungry application, like a SQL database sitting on all-flash, certain areas get overwritten much more often than the rest of the flash array. Those cells wear out faster, which can result in premature SSD or array failure and the need to replace drives. That is not a good scenario in the storage world. In a typical environment, you also lose some performance when you take snapshots: with every snapshot you either do copy-on-write, where existing data is copied aside as new data is written, doubling I/O on the array and causing a performance penalty, or you do redirect-on-write, where all new data is written to different spots on the storage.
With a traditional storage approach, redirect-on-write would cause a huge performance penalty, because your virtual machines end up scattered across the drive and there is no single area where an entire virtual machine sits. Now you have a base, a first snapshot, a second snapshot, a third snapshot, and the data is located in random places, which increases the random nature of the load produced by the applications running in the virtual machines. With the Log-Structured File System, since we always write sequentially, every snapshot can use redirect-on-write with absolutely no performance penalty. That is an easy solution to the performance issues with snapshots in a virtualization environment and in a shared storage environment.
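
The difference between the two snapshot strategies can be sketched as follows. This is a generic illustration of copy-on-write versus redirect-on-write with invented structures, not StarWind’s on-disk format: copy-on-write moves the old block aside before every overwrite, doubling the I/O, while redirect-on-write simply appends the new block and updates a pointer, which suits a log-structured layout.

```python
def copy_on_write(volume, snapshot_area, lba, new_block):
    """Every overwrite costs extra I/O: copy the old block out, then write in place."""
    snapshot_area[lba] = volume[lba]   # extra read + write to preserve the old data
    volume[lba] = new_block            # the actual write
    return 2                           # writes issued (plus one read): roughly double I/O

def redirect_on_write(log, block_map, lba, new_block):
    """The new block is appended to the log; only a pointer changes."""
    log.append(new_block)              # one sequential write
    block_map[lba] = len(log) - 1      # the snapshot keeps pointing at the old version
    return 1                           # writes issued

volume, snap = {0: b"old"}, {}
log, bmap = [b"old"], {0: 0}
print("CoW writes per overwrite:", copy_on_write(volume, snap, 0, b"new"))
print("RoW writes per overwrite:", redirect_on_write(log, bmap, 0, b"new"))
```
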
Inline deduplication does not cause any overhead with the StarWind Log-Structured File System either. Imagine you have standard Windows deduplication and StarWind inline deduplication. The mechanisms look the same to the end user: you write some data, and in the end it consumes less space because redundant blocks are removed. But with Microsoft, the data is first written fully hydrated to the disk. Say you have written a terabyte of email database to your disk: first it consumes the whole terabyte, and only then does Microsoft scan and deduplicate it. With StarWind, deduplication happens the moment you write the data to disk. Once the data is written, it may occupy 200 gigabytes instead of a terabyte, which has much less impact on the storage array. And when this data needs to be fetched again for reads, most of it, because it is deduplicated and consumes less space, is available in the RAM cache, so we fetch it from RAM instead of from disk, which is much faster than working with spinning disk storage.
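
Here is a small, hypothetical comparison of the two approaches in Python, purely to show when the space savings kick in rather than how either product actually stores metadata: post-process deduplication (the Windows-style model described above) lands every block on disk first and reclaims space in a later scan, while inline deduplication checks the block’s hash before it is ever written.

```python
import hashlib

blocks = [b"mailbox-A", b"mailbox-B", b"mailbox-A", b"mailbox-A", b"mailbox-C"]

# Post-process style: everything is written "fully hydrated" first.
post_disk = list(blocks)                      # peak space consumed: all 5 blocks
seen, post_deduped = set(), []
for b in post_disk:                           # a later scan removes duplicates
    h = hashlib.sha256(b).digest()
    if h not in seen:
        seen.add(h)
        post_deduped.append(b)

# Inline style: the duplicate check happens before the write ever reaches disk.
inline_disk, seen = [], set()
for b in blocks:
    h = hashlib.sha256(b).digest()
    if h not in seen:                         # duplicates never consume disk space
        seen.add(h)
        inline_disk.append(b)

print("post-process: peak blocks on disk =", len(post_disk),
      "-> after scan =", len(post_deduped))
print("inline:       peak blocks on disk =", len(inline_disk))
```
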

Use Cases

You may be wondering what the typical use case for StarWind LSFS would be. The Log-Structured File System is not a remedy for all virtualization environments. There are certain use cases where it shines, and there are certain no-gos where you would rather implement a traditional file system that does not do Log-Structuring. Virtualization in general is the use case. If your environment has a lot of random writes, that is where you want to implement the StarWind Log-Structured File System: with random writes eliminated, performance goes sky high, and with inline deduplication, the amount of data we can keep in cache is much higher.
In read-intensive environments, we see the downside of the Log-Structured File System. It is not something you see only with StarWind; it is common to any log-structured file system, whether partially or fully log-structured. When you write data in huge sequential blocks, reads get randomized. If your environment is mostly reads, for example databases where you write a certain amount of data a day but then need to read it heavily for rescanning, reporting, and so on, the read I/O gets really random. The StarWind Log-Structured File System addresses that by using flash as a read cache. A lot of vendors advertise write caching, but why do you really need a flash write cache if your underlying spinning storage array delivers streaming speeds under virtualization workloads? If you can get 30,000 to 40,000 IOPS from RAID 5, why would you spend money on a lot of write-intensive SSDs for a flash write layer? Instead, you just use commodity read-intensive SSDs, which in this scenario are both cheaper and more durable, to accelerate reads. This way, every storage layer works for a purpose: the underlying spinning disks handle streaming writes, where they shine; flash shines at random reads; and RAM serves all I/O, acting as the primary write cache and the deduplication log storage.
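
A minimal read-path sketch, again with invented names and sizes rather than StarWind internals, shows the division of labour: reads are served from a small flash-backed cache when possible, and only misses fall through to the slow, randomized spinning tier.

```python
from collections import OrderedDict

class FlashReadCache:
    """Tiny LRU cache standing in for a read-intensive SSD tier."""
    def __init__(self, backing_disk, capacity=1024):
        self.backing_disk = backing_disk      # slow spinning tier (a dict here)
        self.capacity = capacity
        self.cache = OrderedDict()            # lba -> block, in LRU order
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:                 # served from flash: fast random read
            self.cache.move_to_end(lba)
            self.hits += 1
            return self.cache[lba]
        self.misses += 1                      # fall through to the spinning tier
        block = self.backing_disk[lba]
        self.cache[lba] = block               # populate the cache for next time
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block
        return block

disk = {lba: b"block-%d" % lba for lba in range(10_000)}
cache = FlashReadCache(disk, capacity=100)
for lba in [1, 2, 3, 1, 2, 3, 42, 1]:         # a hot working set re-read repeatedly
    cache.read(lba)
print(f"hits={cache.hits} misses={cache.misses}")
```
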
One more scenario where the StarWind Log-Structured File System shines is virtual desktop environments, thanks to inline deduplication, Log-Structuring, and the write-intensive nature of most virtual desktop workloads. Just for reference: the typical read/write distribution in VDI environments is 80% writes to 20% reads, and 90% of that I/O is almost fully random. This is the environment where you want to implement the Log-Structured File System and where you will see all the benefits, where 100 virtual desktops actually occupy about as much space as 10 virtual desktops would. Most of the data processed throughout the day stays pinned to the flash cache and RAM cache of the device. At the end of the day, that is the ultimate solution. We do not seek top-notch performance or consolidated snapshots for their own sake. What we seek is a good experience for the users working in the virtualization environment. We just want our customers to be happy. And if something goes wrong, we want to be able to restore everything fast and smoothly. Or, if something breaks in the middle of the night, we just want to get a text saying that StarWind High Availability is working just fine.