Maximize your Data Center Efficiency with StarWind HyperConverged Appliance Powered by Mellanox 100GbE Network


Hosts:
Max Kolomyeytsev, Director of Product Management, StarWind
Motti Beck, Director, Enterprise Market Development, Mellanox Technologies

Duration: 43:42

PUBLISHED/UPDATED: April 20th, 2016

Key points of the webinar:

  • Overview of Mellanox interconnect solutions
  • What is StarWind HyperConverged Appliance
  • How to achieve higher IOPS in your virtualization infrastructure with StarWind HCA powered by Mellanox 100GbE networking

Accelerate your applications' performance in your virtualized data center over high-bandwidth networking with the StarWind and Mellanox joint solution. Build your virtualized infrastructure using StarWind HyperConverged Appliance powered by Mellanox 100GbE networking to achieve higher IOPS at the lowest cost of ownership.

Webinar transcript

Mellanox – Leading Supplier of End-to-End Interconnect Solutions

Mellanox is a company that provides end-to-end interconnect solutions for the data center and supports all the technologies, speeds and standards found in the data center itself, including Ethernet with RoCE (RDMA over Converged Ethernet) and also InfiniBand. We support all the speeds between 10 and 100 Gb/s, which includes 25, 40, 50, 56 (the latter being specific to InfiniBand) and, of course, 100 itself. Mellanox is unique: it doesn't just provide an end-to-end solution, all the products are built by its own team. Every component of the solution is designed and manufactured in-house: the ICs, the controllers themselves, the adapter cards, the switches, the software that helps you manage and control the data center, and also the cables, which are quite challenging at 100 Gb/s.

10 GbE Architecture Efficiency Maxed-Out

Why are most deployments today running over 10 Gb, and why did we decide to go higher than 10 Gb, up to 100? The reason is that, based on market trends, 10 Gb has pretty much maxed out. Data centers have become ultra-fast and ultra-scalable. The demand for higher bandwidth and better return on investment keeps growing. To get there, you need to support more virtual machines per system. You want to maximize CPU utilization, but you don't want to spend too many cycles running jobs that are not associated with the application itself. And, of course, you want to use less space and make the most of the space you have; those are the market trends. We also live in a mobile world, where you need to support millions of mobile users every day. Processing data in real time is pretty much a given these days, so it's not just about doing the work, but doing it on time and in the most efficient way. This is why 10GbE maxed out and the industry has requested, and already deployed, faster than 10GbE, whether that's 25GbE or even 100GbE.

Entering the Era of 25GbE, 50GbE and 100GbE

Our products go well beyond 10GbE, including the recently introduced lineup that consists of four components. One of them is LinkX, our cable family, which supports 100 Gb/s at reaches up to 600 m. Supporting 100 Gb/s over copper is quite challenging, and we offer it not only on copper but also on optical, with two product lines: VCSEL and Silicon Photonics.
ConnectX-4 can support all the speeds from 10GbE to 100GbE. If you don't need 100GbE, you can use ConnectX-4 Lx, which supports up to 50GbE. The 10 and 25GbE ports use an SFP-type connector, while 40 and 50GbE use QSFP. And if you want 56GbE (that's also Ethernet; it's not an industry standard, but we support it), it runs over QSFP as well. To connect all of them there is Spectrum, our latest switch, which supports 32 ports at up to 100 Gb/s. If you don't need 100GbE, you can run each of the ports at a lower speed, and with splitter cables you can support more ports.
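For a rough sense of what the splitter-cable option does to port counts, here is a minimal Python sketch. It only restates the arithmetic implied above (a 32-port 100GbE switch, each port built from 4×25G lanes); it is illustrative, not a configuration tool.

```python
# Illustrative port-count math for a 32-port 100GbE switch, as described above.
# Each 100GbE port is built from 4 x 25G lanes, so splitter cables can break
# one port out into several lower-speed ports.

TOTAL_PORTS = 32            # 100GbE ports on the switch

breakouts = {
    "100GbE (no split)": 1,  # one cage stays one port
    "2 x 50GbE":         2,  # each cage splits into two 50GbE ports
    "4 x 25GbE":         4,  # each cage splits into four 25GbE ports
}

for mode, ports_per_cage in breakouts.items():
    print(f"{mode:18s} -> {TOTAL_PORTS * ports_per_cage} usable ports")
```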
Spectrum is the leading switching product in the market today, and it's open. This is one of the things Mellanox is very proud of: since the first day of the company we have supported open technology. Even InfiniBand, which the company started with, was an open industry standard. We give our customers the opportunity to use the switch with other network operating systems, whether that's Cumulus or their own network operating system. Or, if you want, you can use the Mellanox operating system, which comes with a very stable and rich SDK that lets you port your own software to run on top of it. For orchestration, you can use industry-standard products that help you manage and control the data center, or you can use Mellanox's own NEO. As you can see, we have a full end-to-end solution, and everything is developed by Mellanox.

The Road to 25, 50, and 100GbE

You might ask yourself how we got to 100 Gb/s. More than 40 years ago, Bob Metcalfe developed and started to promote the idea of Ethernet. About 30 years later, at the beginning of the 2000s (specifically in 2003), the industry got 10GbE, which runs over a single lane; if you need more than 10GbE, you use 4 lanes to get 40GbE. According to the IEEE roadmap, the next speed was 100, built as 4×25. Once we started to see higher demand than even 40 (and Mellanox is actually the leader in 40GbE today; we believe we hold more than 90% of the market share), the industry requested not just 100 but something in between. We asked the IEEE to standardize 50, which is 2×25, and 25, which is 1×25. The IEEE didn't accept it at first, and then, together with leading companies like Google and Microsoft, we promoted the industry standard that was eventually adopted by the IEEE. Today these are the standards: you have 25, 50 and 100. 25 is going to be the next generation, the next 10; 50 is going to be the next 40; and 100 is going to be here as well.
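The lane arithmetic behind these generations can be restated in a few lines of Python; the figures below simply repeat the lane counts and per-lane rates mentioned above.

```python
# Ethernet speeds expressed as lanes x per-lane rate, per the generations above.
generations = [
    ("10GbE",  1, 10),   # single 10G lane (circa 2003)
    ("40GbE",  4, 10),   # four 10G lanes
    ("25GbE",  1, 25),   # single 25G lane -- "the next 10"
    ("50GbE",  2, 25),   # two 25G lanes   -- "the next 40"
    ("100GbE", 4, 25),   # four 25G lanes
]

for name, lanes, rate in generations:
    print(f"{name:7s} = {lanes} lane(s) x {rate} Gb/s = {lanes * rate} Gb/s")
```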

Moving to Mellanox 25GbE, 50GbE and 100GbE

It is going to be a pretty simple migration path. There is no need to make any modifications to the current architecture; the connectors stay the same. 25GbE uses the same connector as 10GbE (SFP+ form factor), and 50GbE uses the same connector as 40GbE (QSFP), but you get a better cost point. Deployments today run 10 and 40 Gb between storage and compute and at the aggregation layer. You get the same topology, but much higher efficiency, when you move to 25, 50 and 100. A pretty simple, very easy migration with higher efficiency.

Higher Interconnect Performance Translates to Higher ROI

Performance directly affects ROI. Our particular example is a VDI deployment analysis assuming you need 5,000 virtual desktops. The number of desktops per server goes up as the bandwidth goes up; with more virtual desktops per server, you need fewer servers, fewer switches and fewer cables. You can see here that going from 10 to 50 gives about 3x better efficiency at 50. We haven't benchmarked 100 yet, but the gain is going to be much higher. This is one benchmark that shows the efficiency.
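To make the server-count reasoning concrete, here is a minimal sketch of the back-of-the-envelope math behind such a comparison. The desktops-per-server figures are hypothetical placeholders chosen only to illustrate the roughly 3x ratio; the 5,000-desktop target is the only number taken from the webinar.

```python
import math

TARGET_DESKTOPS = 5000  # the VDI scenario discussed above

# Hypothetical desktops-per-server figures, picked only to illustrate the
# ~3x efficiency ratio mentioned in the webinar (not the benchmark's data).
desktops_per_server = {"10GbE": 60, "50GbE": 180}

for fabric, per_server in desktops_per_server.items():
    servers = math.ceil(TARGET_DESKTOPS / per_server)
    print(f"{fabric}: {servers} servers, plus proportionally fewer "
          f"switches and cables")
```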

Faster Storage Needs Faster Networks!

Another development in the industry is that the hard disk drive is starting to become history, and you see more flash and SSDs, specifically the recent NVMe flash drives; a single ConnectX-4 can support up to three of them. Of course, you can use lower-speed adapters instead, but then you need more cables and more switch ports, which affects overall space and power and costs more. This is where 100 Gb shines, and later we'll discuss how 100 Gb is used in the Storage Appliance and the HyperConverged Appliance. Another way to increase efficiency is to use not just plain Ethernet, but RoCE. 100 Gb can run over TCP/IP, while with RoCE it runs over RDMA. RDMA completely offloads the transport work from the CPU to the I/O controller itself. When you run TCP instead, you need to run the TCP stack, which is pretty heavy, and today's CPUs are pretty expensive: a recent 12-core part costs about $2,000.
You don't want the CPU running jobs that are not associated with the workload itself; you want to offload them to the I/O controller, which costs roughly 10% of the CPU. In our test, running 100GbE over TCP/IP consumed about 8 cores, and that is why we got only 54 Gb/s. Of course, if we hadn't limited the core count we could have reached 91 Gb/s, like we did with RoCE, but then not much CPU would be left to run the application itself. So it's not just about the technology or the higher speed, but also about efficient offloads that make the overall solution much better. And there are other technologies specifically for storage, like iSER, which we'll go deeper into a little later; iSER is iSCSI over RDMA, and it makes access to the storage itself faster.
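A rough way to see why the offload matters economically is to turn the figures quoted above (8 of 12 cores on a roughly $2,000 CPU to push 54 Gb/s over TCP/IP, versus 91 Gb/s over RoCE with the transport handled by the NIC) into a cost per Gb/s. This is a sketch using only those quoted numbers; it is not a benchmark.

```python
# Back-of-the-envelope cost of the CPU cycles burned by the TCP stack,
# using only the figures quoted in the webinar above.
CPU_PRICE = 2000          # ~$2,000 for a recent 12-core CPU
CORES_PER_CPU = 12

tcp_cores_used = 8        # cores consumed by the TCP/IP stack in the test
tcp_gbps = 54             # Gb/s achieved over TCP/IP with that limit
roce_gbps = 91            # Gb/s achieved over RoCE (transport offloaded to NIC)

tcp_cpu_cost = CPU_PRICE * tcp_cores_used / CORES_PER_CPU
print(f"TCP/IP : {tcp_gbps} Gb/s while burning ~${tcp_cpu_cost:.0f} "
      f"worth of CPU cores on the transport")
print(f"RoCE   : {roce_gbps} Gb/s with the transport offloaded to the NIC")
print(f"CPU cost per Gb/s over TCP/IP: ~${tcp_cpu_cost / tcp_gbps:.0f}")
```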

Faster Data Movement Enables Better Data Utilization

In summary, if you are deploying now, or considering deploying new infrastructure and building your next-generation data center, it is going to be beneficial to consider higher bandwidth, up to 100 Gb/s.

What is StarWind HyperConverged Appliance

It's a really interesting transition we recently made with our highest-end line of appliances. More and more data appears in our customers' data centers, and there is a need to analyze it and make use of it. It's not only about CPU cycles anymore: the more sense we can make of the data, the better it is for the business. So the key is to start asking your supercomputers the right questions. StarWind's idea is to bring supercomputers to your data center without buying giant racks, just in small form factors, like 4 or 8 rack units, which is enough for some serious home-grown analytics.
A little bit about StarWind HyperConverged Appliance. It contains all the components one needs to fulfill application demands, and in this particular case those are top-notch, highly I/O-demanding applications. Hence our decision to switch from 40 and 10 Gb Ethernet to 100 Gb Ethernet with the help of Mellanox technology. StarWind HyperConverged Appliance delivers an amazingly fast and easy way to manage your business-critical applications. It unifies commodity servers, storage, networking and the hypervisor into a single manageable layer. There are multiple brands involved in this appliance; we try to use only best-of-breed components: Dell servers, Mellanox networking, Intel flash technology, and a best-of-breed hypervisor of choice, either vSphere or Hyper-V (we're working on supporting other hypervisors as well). StarWind does the storage management, Veeam handles the backup plan, and we can also do disaster recovery to Azure.

Some Coincidences

There are some interesting coincidences in the industry. That doesn't mean StarWind is the one who pushed the whole industry towards flash and 10GbE or 100GbE, but we have definitely left our mark in adopting this high-end technology and increasing its actual adoption by small and medium companies.
Back in 2011, we saw that, along with virtualization, 10 Gb Ethernet became a commodity in most of our environments. That seems obvious now, but it was an interesting phase in which we first showed that StarWind and iSCSI can operate at high I/O rates. When we published the initial test results, in which StarWind demonstrated 1 million IOPS, it was a small step pushing people towards 10 Gb Ethernet and towards seeing it as a good alternative to existing Fibre Channel configurations.
Later, as PCIe SSDs like Fusion-io gained adoption, more and more people started considering Mellanox 40 Gb Ethernet. From our internal statistics, up to 80% of such setups used 40 Gb Ethernet, and the rest used 10 Gb Ethernet or were evaluating 40 Gb. After 2013 came the history of NVMe SSD adoption. First we saw it adopted by a few universities, then by two or three service providers trying to increase the performance of their highest storage tier. These companies were already using 40 Gb Ethernet by default: just two 1 Gb ports for management, and everything else packed with 40 Gb Ethernet. They were exploring faster options.
That was the first time we explored how to go higher than 40 Gb without just stepping up to 50 or 56 Gb, and that is when we did our first experiments with 100 Gb Ethernet together with Mellanox. In the near future we will see the release of 3D XPoint. If you're not familiar with the technology, it is exposed like NVMe flash but is about 10 times denser, much more durable, and it will be much more affordable. Basically, the whole market will explode. Imagine having the option to go all-flash for half the price you typically see when looking at an all-flash solution. As soon as the new technology appears, the demand for higher-performance networking interconnects will explode as well.

Why 100 GbE?

Nothing is wrong with 40 Gb. There are just applications and workloads that benefit from 100 Gb Ethernet; it's not a question of a particular model, it's the case where you need as much as you can get. As mentioned before, RoCE is a crucial part of high-performance networking. Essentially, if 10GbE was an instant message to your smartphone, 100 Gb Ethernet with RoCE is telepathic messaging right into your brain. The new acronym iSER is essentially the same brain-to-brain interface, but for storage.
StarWind leverages server RAM as a distributed write-back cache, so it is really important for us to use all the optimization technologies available to synchronize that RAM between the boxes as fast as possible. Mellanox 100 Gb Ethernet makes this a reality. In addition, 100 Gb delivers huge flexibility to the cluster networking: virtualized networking will never become a bottleneck if you have 100 Gb between the servers. It is really easy to manage, you can always provision more performance to your system, and you never have to worry about the interconnect being a bottleneck.
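As a rough illustration of why the cache-synchronization traffic benefits from 100GbE, the sketch below estimates how long it would take to mirror a burst of dirty write-back cache between two nodes at different link speeds. The cache size is a made-up example value and protocol overhead is ignored; the point is only how the time scales with bandwidth.

```python
# Time to mirror a burst of dirty write-back cache between two nodes,
# ignoring protocol overhead and latency (illustrative only).
DIRTY_CACHE_GB = 8                            # hypothetical burst of dirty data

for link_gbps in (10, 40, 100):
    seconds = DIRTY_CACHE_GB * 8 / link_gbps  # GB -> Gb, then divide by Gb/s
    print(f"{link_gbps:3d} GbE: ~{seconds:.2f} s to synchronize "
          f"{DIRTY_CACHE_GB} GB of cache")
```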
And, of course, there is the future-proof design. New technologies keep popping up, first available as engineering samples, and just a few months later we start seeing them in real solutions. I'm talking about 3D XPoint in Intel Optane and fast NVMe drives from various vendors, which give you the ability not only to utilize that 100 Gb Ethernet, but to get maximum performance for the application. These are the best instruments to make sure we get high performance from the storage stack and from the networking stack. Hopefully, the CPU will not become the next bottleneck, and the system will evolve in a balanced manner, without bottlenecks.

Performance Comparison

Here is a quick comparison between our standard HyperConverged Appliance, the All-Flash Appliance and the NVMe Appliance we're discussing today. There is about a 50x difference between the NVMe configuration and the standard hybrid SAS-disk-plus-SSD configuration, so we're talking about really I/O-demanding applications.
Some people who want higher performance jump from a hybrid to an all-flash configuration, but for ultimate performance and really high-I/O applications the NVMe option is the best answer. Three NVMe drives can easily fill a 100 Gb interconnect. We deployed them in a redundant manner, with multiple links between the servers, which lets us raise the performance bar up to a 1.5 million IOPS environment. Of course, the only thing you need is an application that can actually push that limit and use it.
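As a quick sanity check on those figures: assuming roughly 3 GB/s of throughput per NVMe drive (a typical ballpark for enterprise NVMe of that era, not a number stated in the webinar), three drives come close to saturating a single 100 Gb/s link, which is why multiple redundant links help. The same sketch converts the 1.5 million IOPS figure into bandwidth for an assumed 4 KB block size.

```python
# Sanity-check arithmetic for the performance claims above.
# Per-drive throughput and block size are assumptions, not webinar data.
GBPS_PER_NVME = 3.0 * 8        # ~3 GB/s per drive -> 24 Gb/s
drives = 3

print(f"{drives} NVMe drives: ~{drives * GBPS_PER_NVME:.0f} Gb/s "
      f"vs. a 100 Gb/s link")

iops = 1_500_000               # figure quoted for the NVMe appliance
block_kb = 4                   # assumed I/O block size
gbps = iops * block_kb * 1024 * 8 / 1e9
print(f"{iops:,} IOPS at {block_kb} KB blocks ~= {gbps:.0f} Gb/s of traffic")
```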

Easy fit for the Existing Network

Here is a question a lot of people ask today: how do I adopt this new technology in my existing environment if I have 1 Gb, 10 Gb or 40 Gb Ethernet? Mellanox products keep the interconnect consistent across 25, 50 and 100 Gb/s, and StarWind complements that fit into existing networking by providing an alternative path. Even if you don't have a 40 Gb or 100 Gb network in place, if you want to start your experience with 100GbE clusters, StarWind provides a minimum starter kit with just two servers; in this case it is possible to interconnect the servers with a direct cable, bypassing the switch, so you can start using the technology without upgrading the entire infrastructure. You can still use 10 Gb uplinks (or 40, or 50) to connect the cluster to your clients, and on the backend StarWind will unleash the full performance, up to 100 GbE, at the application level.

Features of StarWind HCA

What are the benefits StarWind HyperConverged Appliance brings to the datacenter? First of all, it is hyperconverged, and for an environment like ours, where we're talking about NVMe storage, being hyperconverged is the only option to deliver consistently high performance, because a traditional environment with dedicated storage often becomes bottlenecked by the interconnect between the storage and compute layers.
With hyperconvergence, StarWind talks directly to the local storage, in our case DRAM and, of course, NVMe. Putting any switching between these resources and the application dramatically drops the performance and increases the latency. We want to avoid that, so we keep the storage as close to the application as we can while keeping it fault-tolerant.
Another feature of StarWind is the flexibility of the protocols we can use. We do iSCSI, and iSER is planned for release shortly. We can also present shared storage over multiple protocols, including SMB3 with SMB Direct. We're also working on NVMe over Fabrics technology to make sure we stay one step ahead of the datacenter infrastructure.
StarWind leverages local RAM as cache. This is the fastest storage resource we can get, so we make use of it as a write-back cache. RAM is volatile, so we synchronize that cache between the participating servers for maximum reliability.
StarWind has always been known for delivering highly available storage, and this is something we keep in the HyperConverged Appliance. We mirror the NVMe storage between the servers, we mirror the RAM between the servers, and if there are additional storage resources, we keep a redundant copy on another cluster node so that if a node fails, the application seamlessly and painlessly restarts, or keeps running, on the other node. And, of course, there is asynchronous replication, which is an effective disaster recovery option for individual virtual machines or applications, or volume-level replication to a public cloud or any DR location. These options can be configured in parallel, so you can have any one of them or all of them at the same time.
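To see why the fast interconnect matters for the synchronous mirroring described above, here is an illustrative write-latency budget: a mirrored write is acknowledged only after the remote copy lands too, so a network round trip is added to every write. Every latency figure below is an assumed, order-of-magnitude placeholder, not a StarWind measurement.

```python
# Illustrative latency budget for a synchronously mirrored write.
# Local and remote writes are assumed to proceed in parallel, so the remote
# path (one network round trip plus the remote NVMe write) dominates.
NVME_WRITE_US = 25            # assumed NVMe write latency, microseconds

network_rtt_us = {
    "10GbE TCP/IP": 80.0,     # assumed software-stack round trip
    "100GbE RoCE":   5.0,     # assumed RDMA round trip
}

for fabric, rtt in network_rtt_us.items():
    total = rtt + NVME_WRITE_US
    print(f"{fabric:13s}: ~{total:.0f} us per mirrored write "
          f"({rtt / total:.0%} spent on the interconnect)")
```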

Single Pane of Glass Hyper-V Management by 5nine

The HyperConverged Appliance supports multiple hypervisors. With vSphere, there is one big unified management tool, vCenter. In other cases the alternative, Microsoft Hyper-V, is used with System Center Virtual Machine Manager, which is a good product with a great set of tools. However, it has an increased hardware footprint and a level of complexity that is not necessary in smaller environments, even ones tailored for ultra-high performance.
Thus, for smaller environments we use 5nine Hyper-V Manager with StarWind and Veeam integration, to make sure we can manage all the components of the appliance from one place. It provides a really neat way to manage our virtual machines, clusters, storage and backups from a single window. It can be centralized, and it can also be launched on the appliance itself, delivering a better user experience and easier management. A critical part of it is storage monitoring, especially when we talk about NVMe, because if something goes wrong, we need to replace the storage really fast and make sure our applications do not suffer from the underlying hardware problems.

Veeam Integration

Veeam integration is a really easy way to monitor your backup jobs, check whether anything has failed or a job has finished correctly, and manage the whole backup process from the same window where you already manage your virtual machines, applications and storage.

Support of StarWind HCA

One more thing that differentiates StarWind from lots of other options, including the so-called do-it-yourself scenario and other hyperconverged offerings, is that we provide a single support umbrella for all the software and hardware we ship. Should there be any issue, whether it's with Hyper-V, vSphere, StarWind, Veeam or any other part of the appliance, you contact StarWind, and we solve the problem for you.
That gives you a unified single point of contact or, as people tend to say when it comes to support, a single throat to choke. It also removes the finger-pointing issue, which often arises in a multi-vendor environment. When you contact, let's say, hypervisor support, they say: no, it's not on our side, it's the storage provider. Then you contact the storage provider, and the storage provider points back at the hypervisor. You can be stuck in that circle for hours, or even days, and that's not something you need when the downtime costs you money and maybe your whole business. That's something we take off our customers' shoulders: we use our contacts inside the vendor companies to get better support and make sure the customer experience is excellent even when there are multi-vendor issues.

Applications

Now let's talk about the applications for such an appliance. People may think: "Wow, 1.5 million IOPS, where would I ever need that? We're talking spaceships, but we don't have highways in our country." The applications may be really surprising, and a lot of companies are starting to look into implementing such clusters on their premises. Previously, they went to universities that had supercomputers to analyze and make sense of the data they'd gathered; now they can do it in their own datacenters. Essentially this is called big data: you not only gather the data, you also make sense of it and do good things for your business with those insights. Say you could look through the weather forecast, see whether a hurricane is going to interrupt your business, and based on that decide whether you need to relocate the business, evacuate people, or schedule appointments with the power company to make sure they come in and fix the electricity in a timely fashion.
For cloud computing applications, the case is more obvious. A lot of people are moving their whole production to clouds, and on the other side of the cloud, providers need to make sure they deliver the highest possible performance to all of their customers. One of the few ways to do it is to employ NVMe storage, and to keep the infrastructure that the NVMe supports reliable, we need to synchronize it. This is where StarWind HyperConverged Appliance comes in, allowing you to deliver super-high performance while maintaining reliability for your customers.
There are lots of uses for such appliances in the digital content market, where you need to ingest and process content and typical tiered solutions don't work because of latency and multi-tenant access issues. A close relative of big data is scientific research, where people run models on computers instead of real tests; most car vendors use mathematical models for most of their security and safety tests. The crash tests you see on YouTube, with a mannequin hitting a concrete block, are just a few percent of a huge array of tests performed on computers.
And systems like StarWind HCA can be a foundation for such test environments, delivering results that can save millions of lives based on the mathematics running on them. More interesting, and more controversial, is economic analysis. Trades can be analyzed, and the economic outcome of currency exchange rates and commodity markets can be modeled on systems like this really fast, so a company whose business is based on trading commodities can ask: "Do I need to sell today? Let me check with the supercomputer", and the supercomputer will tell you the possible outcome. The possibilities are endless. As I said, the biggest challenge so far is to start asking our supercomputers the right questions, because they are already here.