Testing Starwind "speeds" with SSD

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

User avatar
awedio
Posts: 89
Joined: Sat Sep 04, 2010 5:49 pm

Thu May 17, 2012 5:26 pm

I would like to perform some simple tests comparing a few different flavors of iSCSI SAN software.

I'm primarily interested in comparing performance (aka "speed": IOPS, installing an OS, copying files, etc.). Dedup will not be part of this test.

After reading these (see links below), I disagree with the idea of testing on a single HDD. No one who buys these products will ever run them on a single hard drive.

http://blogs.jakeandjessica.net/post/20 ... art-1.aspx
http://www.starwindsoftware.com/starwin ... est-report

Question for Anton & the crew, given the equipment I have (see list below):
1) To achieve maximum performance, how should the SSDs be used?
2) Which version of StarWind should I use for testing?
3) Please feel free to add any other relevant info.

Equipment list:

Head unit:
Chassis: SM SC826-R800LPB (No backplane)
Motherboard: SM X8DTH-6F
CPU: 2x Intel Xeon E5645 6 core
HBA: 1x LSI 9280-8E
RAM: 12 x 8GB Crucial ECC
NIC: 6x Intel EXPX9502CX4 10GbE
Boot SSD: 2x Samsung 830 128GB SSD (mirrored)

[Photos of the head unit build]

JBOD 1: SuperMicro SC216E26-R1200LPB
Slot 0: STEC SLC SSD
Slots 1-3: STEC MLC SSD
Slots 4-19: Seagate ST91000640SS (1TB 7200 rpm 6Gb/s SAS 2.5”)
Slots 20-23: reserved for future ST91000640SS (as budget allows)

JBOD 2: SuperMicro SC216E26-R1200LPB
Slots 0-23: reserved for future ST91000640SS (as budget allows)

JBOD 3: SuperMicro SC826E16-R800LPB
Slots 0-11: Seagate ST31000640SS (1TB 7200 rpm 3Gb/s SAS 3.5”)
User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri May 18, 2012 1:06 pm

A) It does not matter what storage you use. If some software fails because it runs out of memory, it will happen with any type of underlying storage.

B) It's nice that you're not planning to test dedupe, but I have some bad news for you :) After V6 we'll offer a single unified engine for deduplicated and non-deduplicated data
(in a nutshell: in non-dedupe mode, collision checking will be turned OFF, and the RAM goes to the L1 cache instead of the dedupe hash tables). The reason is simple: the log-structured
file system provides GREAT benefits for writes, doing magic for particular load patterns. We'll preserve RAW images of course, but they won't come anywhere close to the dedupe engine.

1) Full SSD mode for now. V6 will have tiering and an L2 SSD cache, but we don't have a public beta for these features yet.
2) The current version should be fine if you're using a single head + write-back cache mode. For dual head we recommend the V5.9 beta, as the HA engine has been re-worked for performance once again.
3) For now, just allocate a FLAT image file as a VM container and use a lot of write-back cache. That's the way to extract maximum performance, IOPS and MB/s. (A rough cache-sizing sketch follows below.)
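
A minimal back-of-envelope sketch of the "lots of write-back cache" point, in Python. The OS reserve and the number of targets below are illustrative assumptions, not StarWind guidance:

```python
# Rough write-back cache sizing for the head unit described above.
# Assumptions (not vendor guidance): 8 GB reserved for the OS and the
# StarWind service, cache split evenly across four flat-image targets.

TOTAL_RAM_GB = 96      # 12 x 8 GB from the equipment list
OS_RESERVE_GB = 8      # assumed headroom for Windows + StarWind service
TARGETS = 4            # assumed number of flat-image iSCSI targets

cache_budget_gb = TOTAL_RAM_GB - OS_RESERVE_GB
per_target_gb = cache_budget_gb / TARGETS

print(f"RAM available for write-back cache: {cache_budget_gb} GB")
print(f"Per-target cache if split evenly:   {per_target_gb:.1f} GB")
```
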
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

User avatar
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Fri May 18, 2012 1:30 pm

Performance and tuning really depends on your use case, which you haven't really explained. However, I can guess a bit from your hardware spec...

1) You have only one HBA, which has 2 x4 SAS connectors, each lane capable of a max bandwidth of 600MB/sec full duplex. Therefore you have up to 600x4x2 = 4800MB/sec to your storage. However, PCIe limitations (x8 PCIe 2.0) and the limits of the card are going to mean you are limited to more like 3000MB/sec max, IIRC.
2) You have 6x2 = 12 10GbE ports - each one capable of up to 1000MB/sec bandwidth to your iSCSI initiators (clients). So you have a total of 12000 MB/sec to your clients.
3) You have 96GB of RAM which could be mainly used as cache by Starwind

That motherboard has enough PCIe bandwidth not to be a bottleneck, so in theory you should be able to saturate your SAS and 10GbE if you can provide enough demand from your initiators. Unfortunately your storage, in most workloads, isn't going to saturate your single HBA, which in turn isn't going to provide enough bandwidth to saturate your Ethernet connections, so unless most of your workload is in WB cache you will not be able to saturate all those 10GbE connections. (A back-of-envelope calculation of these ceilings follows below.)
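
For reference, a quick Python sketch of those ceilings so the arithmetic is easy to re-run; the 80% PCIe efficiency factor is an assumption, the rest comes from the numbers above:

```python
# Throughput ceilings implied by the hardware list, following the math above.

SAS_LANE_MBPS = 600                 # 6 Gb/s SAS per lane
HBA_PORTS, LANES_PER_PORT = 2, 4    # one LSI 9280-8E with two x4 connectors

sas_ceiling = SAS_LANE_MBPS * LANES_PER_PORT * HBA_PORTS   # 4800 MB/s
pcie_raw = 8 * 500                                         # PCIe 2.0 x8: 500 MB/s per lane
pcie_usable = pcie_raw * 0.8                               # assumed protocol overhead

nic_ports = 6 * 2                                          # six dual-port 10GbE cards
nic_ceiling = nic_ports * 1000                             # ~1000 MB/s per port to initiators

print(f"SAS HBA ceiling:       {sas_ceiling} MB/s")
print(f"PCIe x8 Gen2 usable:   {pcie_usable:.0f} MB/s  <- likely limit for the HBA")
print(f"10GbE aggregate:       {nic_ceiling} MB/s to the initiators")
```
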

I don't know what those STEC SSDs are capable of, but the most SAS can do is 600MB/sec per drive. Maybe do your most write-IOP-intensive stuff on the SLC, and then build a RAID-0 or RAID-1 + hotspare out of the MLCs for the slightly less IOP-heavy work.

If your workload is mostly sequential access of highly random data - e.g. uncompressed HD video - then I would put the hard drives into a big RAID (0 for speed & capacity, 10 for speed & reliability). If the reason you have 12 10GbE ports is that you have 12 clients and they could all be accessing different data, then use multiple, smaller RAIDs.

If your data access is highly random but the most accessed subset of the data can fit on your SSDs or even your RAM, then use them with LSI CacheCade, so you have a double layer cache of 96GB RAM -> xxGB SSD -> xxTB HD.
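
As a rough illustration of why that double-layer cache helps, here is a toy Python model of average service time across the RAM -> SSD -> HDD tiers; the hit rates and latencies below are made-up assumptions, not measurements:

```python
# Toy model of a RAM -> SSD -> HDD cache hierarchy. Each tier serves a
# fraction of the traffic that missed the tiers above it.

tiers = [
    # (name, hit rate of the remaining traffic, latency in ms)
    ("RAM cache (StarWind WB)", 0.60, 0.01),
    ("SSD cache (CacheCade)",   0.70, 0.15),
    ("7200 rpm SAS spindles",   1.00, 8.00),   # everything left lands here
]

remaining = 1.0
avg_latency = 0.0
for name, hit_rate, latency_ms in tiers:
    served = remaining * hit_rate
    avg_latency += served * latency_ms
    remaining -= served

print(f"Average service time: {avg_latency:.2f} ms "
      f"(vs {tiers[-1][2]:.2f} ms with no caching)")
```
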

If there is a high level of duplication - e.g. you are going to run a bunch of VMs with the same OS - I strongly suggest you look at dedupe, as it will make better use of your RAM/CPU.

Hope this helps.

EDIT: who made me Superman?! It's not very accurate, I have many more weaknesses than just kryptonite...
User avatar
awedio
Posts: 89
Joined: Sat Sep 04, 2010 5:49 pm

Fri May 18, 2012 1:49 pm

I'm really trying to make this a simple test.

A single Hyper-V host, hosting a few VMs, with iSCSI storage connected via 10GbE.

Some of the hardware shown was chosen since I plan to also test ZFS/Nexenta.

The 12x 10GbE ports are more for show; they look good in the pics!

Aitor, any ideas on how one can "saturate" the HBA link?
User avatar
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Sun May 20, 2012 12:17 pm

Any updates here please?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
User avatar
awedio
Posts: 89
Joined: Sat Sep 04, 2010 5:49 pm

Sun May 20, 2012 5:26 pm

No updates yet, was hoping to "acquire" a few more Seagate 1TB NL-SAS drives.

These 2.5" drives are not easy to find at a reasonable price!!

Hope to start later this week!!
User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue May 22, 2012 5:21 am

1) It's even worse... IOPS don't scale the same way MB/sec does.

2) 3-4 10 GbE NICs doing full-speed wire transfers will kill the PCIe bus (and CPU). Six will do it for sure :)

3) Yup...

I'd start from a smaller test-bed configuration using a couple of SAS spindles, a lot of RAM for WB cache, and a pair of 10 GbE NICs as an attachment.
Then add components one by one to see how the whole thing scales and where the limit is - software, CPU, PCIe bus or network (see the sketch below).

IMHO of course.
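
One way to organize that component-by-component approach is to lay out the test matrix up front so each run changes a single variable. A minimal Python sketch, with assumed step values:

```python
# Enumerate small configurations first and record IOPS/MBps for each row;
# the axis along which results stop scaling points at the bottleneck
# (software, CPU, PCIe bus or network). Step values are assumptions.

from itertools import product

spindles  = [2, 4, 8, 16]     # SAS drives behind the HBA
cache_gb  = [8, 32, 64]       # StarWind write-back cache size
nic_ports = [1, 2, 4, 6]      # active 10GbE ports toward the initiators

print(f"{'drives':>6} {'cache GB':>8} {'NICs':>4}")
for drives, cache, nics in product(spindles, cache_gb, nic_ports):
    print(f"{drives:>6} {cache:>8} {nics:>4}")
```
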
Aitor_Ibarra wrote:
...
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue May 22, 2012 5:23 am

I did! I always pick avatars for people who stick with this community (reach a certain number of posts). I try to keep them as personalized as possible :)
Aitor_Ibarra wrote:
...

EDIT: who made me Superman?! It's not very accurate, I have many more weaknesses than just kryptonite...
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue May 22, 2012 5:27 am

Please keep us posted, and don't miss my hint about starting a bit smaller than you want, to check how the whole thing (things? ZFS and StarWind side-by-side) will scale.

We'll provide you with some marketing back-end help, linking to your blog with the results and tweeting what you find. It doesn't matter who the winner is :)

Thank you!
awedio wrote:No updates yet, was hoping to "acquire" a few more Seagate 1TB NL-SAS drives.

These 2.5" drives are not easy to find at a reasonable price!!

Hope to start later this week!!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

User avatar
awedio
Posts: 89
Joined: Sat Sep 04, 2010 5:49 pm

Tue May 22, 2012 6:04 am

Anton,

I will definitely keep you posted.

Slight delay, as I have to return the STEC 200GB SLC SSD sample (Z16IZF2E-200UCU).

Its replacement will be the 100GB version (Z16IZF2E-100UCU).
User avatar
awedio
Posts: 89
Joined: Sat Sep 04, 2010 5:49 pm

Tue May 22, 2012 6:05 am

Is my Avatar Darth Vader?

btw, is Anatoly any good at soccer?
anton (staff) wrote:I did! I always pick avatars for people who stick with this community (reach a certain number of posts). I try to keep them as personalized as possible :)
Aitor_Ibarra wrote:
...

EDIT: who made me Superman?! It's not very accurate, I have many more weaknesses than just kryptonite...
User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue May 22, 2012 6:09 am

These are good ones! At least you're not going to blow your SSDs with your performance tests :) BTW, make sure you don't fill them more than 50%, or they will suffer performance degradation.
awedio wrote:Anton,

I will definitely keep you posted.

Slight delay, as I have to return the STEC 200GB SLC SSD sample (Z16IZF2E-200UCU).

Its replacement will be the 100GB version (Z16IZF2E-100UCU).
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue May 22, 2012 6:10 am

No, that's what Google gave for your nickname.

AFAIK yes, but it was his own selection, not mine.
awedio wrote:Is my Avatar Darth Vader?

btw, is Anatoly any good at soccer?
anton (staff) wrote:I did! I always pick avatars for people who stick with this community (reach a certain number of posts). I try to keep them as personalized as possible :)
Aitor_Ibarra wrote:
...

EDIT: who made me Superman?! It's not very accurate, I have many more weaknesses than just kryptonite...
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

User avatar
awedio
Posts: 89
Joined: Sat Sep 04, 2010 5:49 pm

Tue May 22, 2012 5:51 pm

anton (staff) wrote:These are good ones! At least you're not going to blow your SSDs with your performance tests :) BTW, make sure you don't fill them more than 50%, or they will suffer performance degradation.
50% is not a problem for these STEC SSDs (going by their whitepaper).
According to STEC, this is what separates good SSDs from great SSDs:

http://www.stec-inc.com/downloads/white ... e_SSDs.pdf
User avatar
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed May 23, 2012 3:02 pm

Great white paper! Very structured approach. Please keep us updated :)
awedio wrote:
anton (staff) wrote:These are good ones! At least you're not going to blow your SSDs with your performance tests :) BTW, make sure you don't fill them more than 50%, or they will suffer performance degradation.
50% is not a problem for these STEC SSDs (going by their whitepaper).
According to STEC, this is what separates good SSDs from great SSDs:

http://www.stec-inc.com/downloads/white ... e_SSDs.pdf
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
