Starwind iSCSI vs Microsoft NFS

Pure software-based VM-centric and "flash-friendly" VM storage (iSCSI, SMB3 and NFS)

Moderators: anton (staff), Anatoly (staff), Max (staff)

Starwind iSCSI vs Microsoft NFS

Postby DavidMcKnight » Thu Mar 31, 2011 5:30 pm

I’m primarily doing workstation virtualization on VMware vSphere. As my unscientific benchmark, I time how long it takes some of my larger apps to load. In one case, when the VM is on a StarWind iSCSI target it takes 58 seconds to load; when a clone of that VM sits on an NFS share it takes 17 seconds. This generally holds true for all my apps. The iSCSI target and the NFS share are hosted on the same Windows Server 2008 R2 machine. Both VMs are the only VMs on their particular datastores. The traffic goes out over the same Intel 10 GbE CX4 connection, which is connected to the same Cisco WS-C3560E switch. Both VMs run on the same VMware host, and both are stored on the same Areca 1880 RAID card at the same RAID level, but on different volumes.

I guess the point I’m trying to make is that the only thing that could be causing this big a difference in performance is iSCSI in general or some specific iSCSI settings. I’ve scoured the net looking for optimal 10 GbE iSCSI settings and have tried most of them. As it stands, I’m seeing a dramatic performance difference between iSCSI and NFS. Should I be?

Given that I’m using StarWind, VMware, and Intel 10 GbE, is there a checklist of settings and their values I should be using to get optimal performance from iSCSI in general and/or StarWind specifically?

BTW, yes I have configured my server to use the “Recommended TCP/IP settings” found in this forum.
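For readers who don't have that forum thread handy: the "recommended TCP/IP settings" for Windows Server 2008 R2 generally boil down to a few netsh global tweaks. A minimal sketch, assuming the commonly circulated community values rather than StarWind's exact list:

```shell
# Commonly suggested Windows Server 2008 R2 TCP tweaks for 10 GbE iSCSI.
# These are typical community recommendations, not StarWind's official list.
netsh int tcp set global autotuninglevel=normal    # let the receive window scale up
netsh int tcp set global chimney=disabled          # TCP offload engine often hurts iSCSI latency
netsh int tcp set global rss=enabled               # spread receive work across CPU cores
netsh int tcp set global congestionprovider=ctcp   # Compound TCP for high-bandwidth links
```

Run `netsh int tcp show global` afterwards to confirm the values took effect.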

Thanks,
DavidMcKnight
 
Posts: 35
Joined: Mon Sep 06, 2010 2:59 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby anton (staff) » Thu Mar 31, 2011 8:30 pm

1) Under "Microsoft NFS" we should read "Microsoft SMB", right?

2) No, that is not the expected behaviour. You should be seeing a very different picture: the reverse of what you describe.

3) Could you tell us more about your setup? Do you run HA or non-HA? Do you use StarWind's write-back cache on your target? What cache size? And what do you use as the backend of the iSCSI target: an image file or a partition?

4) And what exactly is your procedure now? Are both "shares" (SMB and iSCSI) freshly mounted after the initiator machine is rebooted (to ensure the initiator-side cache isn't skewing the whole thing) before you check how fast the VM loads?

Could you please clarify so we can continue? Thank you!

Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
 
Posts: 3538
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Re: Starwind iSCSI vs Microsoft NFS

Postby anton (staff) » Thu Mar 31, 2011 8:32 pm

P.S. You need to configure BOTH the server and the client with the "recommended TCP/IP settings", then make sure your network runs TCP at wire speed, and only then proceed to testing StarWind's RAM disk. Only after confirming close-to-wire speed with a remotely mounted RAM disk should you move on to image files, CDP, or de-dupe.
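Putting numbers on "wire speed" before involving StarWind at all takes only a quick iperf run between the two hosts. A sketch, with placeholder addresses and typical 10 GbE options:

```shell
# On the StarWind server (receiver):
iperf -s -w 512k

# On the test client (sender); 10.0.0.10 is a placeholder address:
iperf -c 10.0.0.10 -w 512k -P 4 -t 30

# A healthy 10 GbE link should report roughly 9 Gbit/s aggregate across streams.
# If it doesn't, fix NIC/driver/switch tuning before touching any iSCSI settings.
```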
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
 
Posts: 3538
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Re: Starwind iSCSI vs Microsoft NFS

Postby DavidMcKnight » Fri Apr 01, 2011 2:16 pm

1) Under "Microsoft NFS" we should read "Microsoft SMB", right?

Well, this is what I'm seeing under Server Manager... The NFS volume I'm connecting to from VMware is Volume-03-03.

Image

3) Do you run HA or non-HA?
This datastore is NOT doing any HA at the moment.

Do you use StarWind's Write-Back on your target? Cache size? What you use as backend of iSCSI target? Image file? Partition?

Here are the Starwind Setting for my iSCSI Target Volume-03-02
Image

Here are my Starwind Advanced Settings:
Image

Here are my ESXi iSCSI Initiator Advanced Settings:
Code:
Header Digest: Prohibited
Data Digest: Prohibited
ErrorRecoveryLevel: 0
LoginRetryMax: 4
MaxOutstandingR2T: 1
FirstBurstLength: 262144
MaxBurstLength: 262144
MaxRecvDataSegLen: 131072
MaxCommands: 128
DefaultTimeToWait: 2
DefaultTimeToRetain: 0
LoginTimeout: 15
LogoutTimeout: 15
RecoveryTimeout: 10
NoopTimeout: 10
NoopInterval: 15
InitR2T: False
ImmediateData: True
DelayedAck: True
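As an aside, parameters such as DelayedAck in the list above can also be inspected and changed from the ESXi command line. On ESXi 5.x and later the syntax is roughly the following (vmhba33 is a placeholder adapter name; ESX/ESXi 4.1, as used in this thread, relied on the older esxcli swiscsi / vmkiscsi-tool syntax instead):

```shell
# Show current parameters of the software iSCSI adapter (vmhba33 is a placeholder)
esxcli iscsi adapter param get -A vmhba33

# Disabling DelayedAck is a common tuning step for latency-sensitive iSCSI I/O
esxcli iscsi adapter param set -A vmhba33 -k DelayedAck -v false
```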
DavidMcKnight
 
Posts: 35
Joined: Mon Sep 06, 2010 2:59 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby @ziz (staff) » Fri Apr 01, 2011 2:29 pm

Since you are noticing the performance difference when using "Disk Bridge", you should first check the performance of the network link between your StarWind server and your ESX server. If you get the full bandwidth your link can deliver, go further and test a RAM device: create a RAM device, connect it to a VM on your ESX host, and test performance with tools such as IOmeter or ATTO.
Please refer to our benchmarking servers document to learn more about the different tests you should run to check your system's performance. Follow the link to download the document: http://www.starwindsoftware.com/benchma ... vers-guide
Please note that if you use a Windows VM for testing, it should have the TCP/IP tweaks applied too.
Aziz Keissi
Technical Engineer
StarWind Software
@ziz (staff)
 
Posts: 57
Joined: Wed Aug 18, 2010 3:44 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby @ziz (staff) » Fri Apr 01, 2011 2:32 pm

By the way, you are using the default cache size assigned by the StarWind Console; try a larger write-back cache, 512 MB for example.
Aziz Keissi
Technical Engineer
StarWind Software
@ziz (staff)
 
Posts: 57
Joined: Wed Aug 18, 2010 3:44 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby DavidMcKnight » Fri Apr 01, 2011 2:40 pm



As my original post said, I am benchmarking... I know my network can go faster, because when I put a VM on NFS it's three times faster. So until I can load apps at about the same speed on both VMs, I know it's not a network-link issue.
DavidMcKnight
 
Posts: 35
Joined: Mon Sep 06, 2010 2:59 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby @ziz (staff) » Fri Apr 01, 2011 2:52 pm

OK, it is good that you know your network can go faster, but I need numbers I can refer to. It takes five minutes at most to test with NTttcp or iPerf... and I also need test results for a RAM device using ATTO in direct I/O and overlapped I/O with maximum queue depth.
For your Disk Bridge device, try creating it with a larger write-back cache and with no cache at all, and compare the results, again with a benchmarking tool.
Aziz Keissi
Technical Engineer
StarWind Software
@ziz (staff)
 
Posts: 57
Joined: Wed Aug 18, 2010 3:44 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby DavidMcKnight » Fri Apr 01, 2011 3:45 pm

Understood. I will run the tests as soon as I can and post the results.

In the meantime, I have a dumb question whose answer I can't find anywhere on StarWind's site.

I own a StarWind license and I'm still within my one-year maintenance agreement. What web page do I go to to see the current public version/release of StarWind (so I can easily tell whether I'm running the latest release)? And what page do I go to to download the current build? Yes, I can download the demo, but isn't there a place I can log into and download it from?
DavidMcKnight
 
Posts: 35
Joined: Mon Sep 06, 2010 2:59 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby @ziz (staff) » Mon Apr 04, 2011 7:20 am

OK, we will wait for your results.
Information about the newest releases of StarWind is sent to all our customers via email. The same information is available here: starwind-f5/starwind-the-most-recent-version-t2069.html
The newest version can be downloaded from http://www.starwindsoftware.com/download-free-trial ; use your credentials to log in.
You can always contact the technical support team, and they will be glad to guide you.
Aziz Keissi
Technical Engineer
StarWind Software
@ziz (staff)
 
Posts: 57
Joined: Wed Aug 18, 2010 3:44 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby DavidMcKnight » Wed Apr 06, 2011 3:13 pm

In the interest of full disclosure, I think I found the "yeah, but" problem with Microsoft NFS: its write speeds with vSphere are terrible. Even though I get four times the read performance, I get roughly one tenth the write speed. According to my research, this is because when vSphere writes to an NFS share, it tells the datastore not to acknowledge the write until the data is completely written to the disk drives. Writing to the datastore's cache isn't good enough, and there doesn't seem to be any way around this with the NFS services included in Server 2008 R2. Supposedly, if I switched to a real Linux solution, I could apply settings that work around the NFS write issue.
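For context, on a Linux NFS server the behaviour described here is governed per export: the default sync option honours vSphere's stable-write requests, while async acknowledges writes as soon as they reach server RAM (fast, but data is lost if the server crashes before flushing). A sketch, with a placeholder path and subnet:

```shell
# Append an export that ACKs writes from RAM ('async'); path and subnet are placeholders.
# WARNING: 'async' trades crash safety for write speed.
echo '/srv/vmstore 10.0.0.0/24(rw,no_root_squash,async)' >> /etc/exports
exportfs -ra   # re-export everything so the change takes effect
```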

Setting that aside, the question I still have is why NFS's read speeds are so much better than iSCSI's. I think I'll just file a support request, see if I can get someone to look over my setup, and hope they can spot the problem.
DavidMcKnight
 
Posts: 35
Joined: Mon Sep 06, 2010 2:59 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby anton (staff) » Wed Apr 06, 2011 3:35 pm

For now the guys are investigating this issue locally, but please stay tuned and be ready for someone from our team to jump onto your hardware remotely.

Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
 
Posts: 3538
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Re: Starwind iSCSI vs Microsoft NFS

Postby Bohdan (staff) » Tue Apr 12, 2011 9:41 am

Hi,
NFS is cached by Windows, while StarWind by default is not.
Here are our results.

Server:
CPU Intel Xeon E5430
MB Intel s5000vsa
RAM 8 GB
HDD st31000528as
OS Windows 2008 R2 SP1

Client:
CPU Intel Xeon x3470
MB Intel S3420GPLC
RAM 4 GB
OS ESX 4.1

ATTO Disk benchmark tests

Local HDD test:
Image

NFS:
Image

DiskBridge with WB cache 64MB
Image

ImageFile with WB cache 64MB
Image

DiskBridge with WB cache 2048MB
Image

ImageFile with WB cache 2048MB
Image

VM OS start test

VM configuration:
Ram 1 GB
OS Win 2008R2 SP1
HDD 15 GB

Image
Bohdan (staff)
Staff
 
Posts: 434
Joined: Wed May 23, 2007 12:58 pm

Re: Starwind iSCSI vs Microsoft NFS

Postby Aitor_Ibarra » Tue Apr 12, 2011 1:04 pm

Hi,

Bohdan - your ATTO results look like you're running both NFS and StarWind over 1 GbE, not 10 GbE: they max out around 100 MB/s, yet your drives are capable of 125-130 MB/s. With 10 GbE, and the cache levels you are giving StarWind, I get much better results; in fact, with an ATTO test that fits into cache, using a non-HA target on 5.5, I was able to reach 1 GB/s over 10 GbE.

David - I use Intel 10 GbE too... I would recommend you look at the Receive Side Scaling settings on both the StarWind box and the clients, as they made a big difference for me in testing.

My iSCSI clients (Windows 2008 R2 SP1 running Hyper-V) are currently set like this:
ProSet 15.7.176.0 (i.e. driver downloaded from Intel)
Advanced: Profile: Virtualisation Server (Hyper-V)
Advanced: Settings: Header Data Split : Enabled
Advanced: Interrupt Moderation : Enabled
Advanced: Jumbo Packet: 9014 Bytes
Advanced: Large Send Offload (IPv4) : Enabled
Advanced: Large Send Offload (IPv6) : Enabled
Advanced: Maximum number of RSS Processors : 16
Advanced: Performance Options: Flow Control: Rx & Tx Enabled
Advanced: Performance Options: Interrupt Moderation Rate: Adaptive
Advanced: Performance Options: Low Latency Interrupts : not set
Advanced: Performance Options: Receive Buffers : 512 (but maybe try increasing this)
Advanced: Performance Options: Transmit Buffers : 512 (but maybe try increasing this)
Advanced: Preferred NUMA node : System Default
Advanced: Priority & VLAN : Priority & VLAN enabled
Advanced: Receive Side Scaling : Enabled
Advanced: Receive Side Scaling Queues : 4 Queues (maybe do more if you have the cpu cores)
Advanced: Starting RSS CPU : 0
Advanced: TCP/IP Offloading Options : everything enabled
Advanced: Virtual Machine Queues : enabled (although not actually used, as the adapter is used exclusively for iSCSI)
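The RSS items in the list above can be sanity-checked from the OS side as well; a minimal sketch for Windows Server 2008 R2 (per-adapter queue counts, however, live only in the driver's Advanced properties, not in netsh):

```shell
# Confirm the global state; look for "Receive-Side Scaling State : enabled"
netsh int tcp show global

# Enable RSS globally if it is off; per-NIC queue counts are set in the
# adapter's Advanced driver properties (e.g. "Receive Side Scaling Queues")
netsh int tcp set global rss=enabled
```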

One of the StarWind boxes is set like this (note: these are the host settings; StarWind is actually running in a VM):
Driver: 2.4.36.0 (Microsoft supplied driver, because Intel driver refused to install on clean install of R2 SP1)
Advanced: Flow Control : Rx & TX Enabled
Advanced: Interrupt Moderation : Enabled
Advanced: Interrupt Moderation Rate : Adaptive
Advanced: IPv4 Checksum Offload: Rx & TX Enabled
Advanced: Jumbo Packet: 9014 Bytes
Advanced: Large Send Offload (IPv4) : Enabled
Advanced: Large Send Offload (IPv6) : Enabled
Advanced: Maximum number of RSS Processors : 16
Advanced: Preferred NUMA node : System Default
Advanced: Priority & VLAN : Priority & VLAN enabled
Advanced: Receive Side Scaling : Enabled
Advanced: Receive Side Scaling Queues : 8 Queues
Advanced: Starting RSS CPU : 0
Advanced: TCP Checksum Offload(IPv4) : Rx & TX Enabled
Advanced: TCP Checksum Offload(IPv6) : Rx & TX Enabled
Advanced: Transmit Buffers: 512 (maybe I should increase)
Advanced: UDP Checksum Offload(IPv4):Rx & TX Enabled
Advanced: UDP Checksum Offload(IPv6):Rx & TX Enabled
Advanced: Virtual Machine Queues: Disabled (I will be enabling them though)

The other server is using ProSet 14.5.1.0; I really need to upgrade it, but I'm waiting for a driver that will install OK on SP1.

Hope this helps,

Aitor
Aitor_Ibarra
 
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Re: Starwind iSCSI vs Microsoft NFS

Postby Bohdan (staff) » Tue Apr 12, 2011 1:24 pm

Hi Aitor,

Yes, the NICs are PRO/1000 PT. Our 10 GbE cards are in use elsewhere right now; these were just quick tests.
Official Intel suggestions are available on the following links:
http://www.intel.com/support/network/sb/cs-025829.htm
http://www.intel.com/support/network/ad ... be+options
Bohdan (staff)
Staff
 
Posts: 434
Joined: Wed May 23, 2007 12:58 pm
