
Starwind iSCSI vs Microsoft NFS

Posted: Thu Mar 31, 2011 5:30 pm
by DavidMcKnight
I'm primarily doing workstation virtualization on VMware vSphere. As my non-scientific benchmark, I time how long some of my larger apps take to load. In one case, when the VM is on a StarWind iSCSI target it takes 58 seconds to load; when a clone of that VM sits on an NFS share it takes 17 seconds. This generally holds true for all my apps. The iSCSI target and the NFS share are hosted on the same Windows Server 2008 R2 machine, and each VM is the only VM on its particular datastore. The traffic goes out over the same Intel 10 Gigabit CX4 connection to the same Cisco WS-C3560E switch. Both VMs run on the same VMware host, and both are stored on the same Areca 1880 RAID card at the same RAID level, but on different volumes.

I guess the point I'm trying to make is that the only thing that could be causing this big a difference in performance is iSCSI in general or some specific iSCSI settings. I've scoured the net looking for optimal 10 Gigabit iSCSI settings and have tried most of them. As it stands now, I'm seeing a dramatic performance difference between iSCSI and NFS. Should I be?

If I'm using StarWind, VMware, and Intel 10 Gigabit, is there a checklist of settings and their values I should be using to get optimal performance from iSCSI in general and/or StarWind specifically?

BTW, yes I have configured my server to use the “Recommended TCP/IP settings” found in this forum.

Thanks,

Re: Starwind iSCSI vs Microsoft NFS

Posted: Thu Mar 31, 2011 8:30 pm
by anton (staff)
1) Under "Microsoft NFS" we should read "Microsoft SMB", right?

2) No, it is not working as expected. You should see a very different picture, in fact the reverse of what you describe.

3) Could you please tell us more about your setup? Do you run HA or non-HA? Do you use StarWind's write-back cache on your target? What cache size? What do you use as the backend of the iSCSI target: an image file or a partition?

4) And what is your current procedure? Do you have both "shares" (SMB and iSCSI) freshly mounted after the initiator machine has been rebooted (to ensure the initiator-side cache is not skewing the result), and then check how fast the VM loads?

Could you please clarify, and we'll continue. Thank you!
DavidMcKnight wrote:I'm primarily doing workstation virtualization on VMware vSphere. As my non-scientific benchmark, I time how long some of my larger apps take to load. [...]

Re: Starwind iSCSI vs Microsoft NFS

Posted: Thu Mar 31, 2011 8:32 pm
by anton (staff)
P.S. And you need to configure BOTH server and client with the "recommended TCP/IP settings", then make sure your network runs TCP at wire speed, and then proceed to test StarWind's RAM disk. Only after confirming close-to-wire speed with a remotely mounted RAM disk should you move on to image files, CDP, or de-dupe.
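
For illustration only, here is a minimal Python sketch of such a wire-speed check; NTttcp and iPerf are the proper tools, this just shows the idea. The port and the 2 GiB transfer size are placeholders.

Code: Select all

# Minimal single-stream TCP throughput check -- a rough stand-in for
# iPerf/NTttcp, good enough to spot a link running far below wire speed.
# Start the receiver on the StarWind box ("python tcp_check.py recv"),
# then run the sender from the client ("python tcp_check.py <host>").
import socket
import sys
import time

CHUNK = 1 << 20   # send/receive in 1 MiB pieces
TOTAL = 2 << 30   # move 2 GiB in total (placeholder size)

def receiver(port=5001):
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    received, t0 = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.time() - t0
    print("received %.0f MiB at %.0f MiB/s" % (received / 2**20, received / secs / 2**20))

def sender(host, port=5001):
    conn = socket.create_connection((host, port))
    payload = b"\x00" * CHUNK
    sent, t0 = 0, time.time()
    while sent < TOTAL:
        conn.sendall(payload)
        sent += CHUNK
    conn.close()
    secs = time.time() - t0
    print("sent %.0f MiB at %.0f MiB/s" % (sent / 2**20, sent / secs / 2**20))

if __name__ == "__main__":
    if sys.argv[1] == "recv":
        receiver()
    else:
        sender(sys.argv[1])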

Re: Starwind iSCSI vs Microsoft NFS

Posted: Fri Apr 01, 2011 2:16 pm
by DavidMcKnight
1) Under "Microsoft NFS" we should read "Microsoft SMB", right?

Well, this is what I'm seeing under Server Manager... The NFS volume I'm connecting to from VMware is Volume-03-03.

Image

3) Do you run HA or non-HA? This datastore is NOT doing any HA at the moment.

Do you use StarWind's Write-Back on your target? Cache size? What you use as backend of iSCSI target? Image file? Partition?

Here are the StarWind settings for my iSCSI target Volume-03-02:
Image

Here are my StarWind Advanced Settings:
Image

Here are my ESXi iSCSI Initiator Advanced Settings:

Code: Select all

Header Digest: Prohibited
Data Digest: Prohibited
ErrorRecoveryLevel: 0
LoginRetryMax: 4
MaxOutstandingR2T: 1
FirstBurstLength: 262144
MaxBurstLength: 262144
MaxRecvDataSegLen: 131072
MaxCommands: 128
DefaultTimeToWait: 2
DefaultTimeToRetain: 0
LoginTimeout: 15
LogoutTimeout: 15
RecoveryTimeout: 10
NoopTimeout: 10
NoopInterval: 15
InitR2T: False
ImmediateData: True
DelayedAck: True

Re: Starwind iSCSI vs Microsoft NFS

Posted: Fri Apr 01, 2011 2:29 pm
by @ziz (staff)
Since you are noticing the performance difference when using "Disk Bridge", you should first check the performance of the network link between your StarWind server and your ESX server. If you get the full bandwidth your link can provide, go further and test a RAM device: create a RAM device, connect it to a VM on your ESX host, and test performance with tools like IOMeter or ATTO.
Please refer to our benchmarking servers document to learn more about the different tests you should run to check your system's performance. Follow the link to download the document: http://www.starwindsoftware.com/benchma ... vers-guide
Please note that if you use a Windows VM for testing, it should have the TCP/IP tweaks applied too.

Re: Starwind iSCSI vs Microsoft NFS

Posted: Fri Apr 01, 2011 2:32 pm
by @ziz (staff)
By the way, you are using the default cache size assigned by the StarWind Console; try more cache in write-back mode, 512 MB for example.

Re: Starwind iSCSI vs Microsoft NFS

Posted: Fri Apr 01, 2011 2:40 pm
by DavidMcKnight
@ziz (staff) wrote:Since you are noticing the performance difference when using "Disk Bridge", you should first check the performance of the network link between your StarWind server and your ESX server. [...]
As my original post said, I am benchmarking... I know my network can go faster because when the VM is on NFS it's three times faster. So until I can load apps at about the same speed on both VMs, I know it's not a network-link issue.

Re: Starwind iSCSI vs Microsoft NFS

Posted: Fri Apr 01, 2011 2:52 pm
by @ziz (staff)
Ok, it is good that you know your network can go faster, but I need numbers I can refer to. It takes at most five minutes to test with NTttcp or iPerf. I also need RAM-device results from ATTO using direct I/O and overlapped I/O with the maximum queue depth.
For your disk bridge device, try creating it with a larger write-back cache and then with no cache, and compare the results, again with a benchmarking tool.
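
If ATTO is not handy, even a crude sequential test will expose the cache effect. For illustration only, here is a rough Python sketch; the test path is a placeholder, and the file must be much larger than the cache for the numbers to mean anything.

Code: Select all

# Crude sequential write/read timing against a test volume. Not a
# replacement for ATTO/IOMeter: no queue depths, and OS caching will
# flatter the results unless the file is much larger than RAM/cache.
# TEST_FILE is a placeholder path on the datastore under test.
import os
import time

TEST_FILE = r"X:\bench.tmp"   # placeholder: file on the target volume
BLOCK = 1 << 20               # 1 MiB blocks
COUNT = 4096                  # 4 GiB total

buf = os.urandom(BLOCK)

start = time.time()
f = open(TEST_FILE, "wb", buffering=0)
for _ in range(COUNT):
    f.write(buf)
os.fsync(f.fileno())          # make sure the data actually left the OS cache
f.close()
write_secs = time.time() - start

start = time.time()
f = open(TEST_FILE, "rb", buffering=0)
while f.read(BLOCK):
    pass
f.close()
read_secs = time.time() - start

total_mib = BLOCK * COUNT / float(2**20)
print("write: %.0f MiB/s, read: %.0f MiB/s" % (total_mib / write_secs, total_mib / read_secs))
os.remove(TEST_FILE)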

Re: Starwind iSCSI vs Microsoft NFS

Posted: Fri Apr 01, 2011 3:45 pm
by DavidMcKnight
Understood. I will run the tests as soon as I can and post the results.

In the meantime, I have a dumb question and I can't find the answer anywhere on StarWind's site.

I own a license of StarWind and I'm still within my one-year maintenance agreement. What web page do I go to to see the current public version/release of StarWind (so I can easily tell whether I'm running the latest release)? What page do I go to to download the current build of StarWind? Yes, I can download the demo, but isn't there a place I can log into and download it from?

Re: Starwind iSCSI vs Microsoft NFS

Posted: Mon Apr 04, 2011 7:20 am
by @ziz (staff)
DavidMcKnight wrote:Understood. I will run the tests as soon as I can and post the results. [...]
Ok, we will wait for your results.
Information about the newest releases of StarWind is sent to all our customers via email. The same information is available here: http://www.starwindsoftware.com/forums/ ... t2069.html
The newest version can be downloaded via this link: http://www.starwindsoftware.com/download-free-trial ; use your credentials to log in.
You can always contact the technical support team, and they will be glad to guide you.

Re: Starwind iSCSI vs Microsoft NFS

Posted: Wed Apr 06, 2011 3:13 pm
by DavidMcKnight
In the interest of full disclosure, I think I found the "yeah, but" problem with Microsoft NFS: its write speeds with vSphere are terrible. Even though I get four times the read performance, I get roughly one-tenth the write speed. According to my research, this is because when vSphere writes to an NFS share, it tells the datastore not to acknowledge the write request until the data is completely written to the disk drives. Writing to the datastore's cache isn't good enough, and there doesn't seem to be any way around this with the NFS services included in Server 2008 R2. Supposedly, if I switched to a real Linux solution, I could apply settings to work around the NFS write issue.
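
To illustrate what those "stable" writes cost, here is a hypothetical Python sketch (POSIX-only; the mount point is a placeholder): an O_SYNC write is not acknowledged until the data is on stable storage, which is essentially the commitment ESX requests for every NFS write, while a buffered write returns as soon as the data is in the server's cache.

Code: Select all

# Contrast buffered writes with O_SYNC writes, which do not return
# until the data reaches stable storage -- roughly the commitment ESX
# demands for its NFS writes. POSIX-only sketch; MOUNT is a placeholder.
import os
import time

MOUNT = "/mnt/nfs-test"       # placeholder: the share under test
BLOCK = b"\x00" * 65536       # 64 KiB per write
COUNT = 256                   # 16 MiB total

def timed_write(extra_flags, label):
    path = os.path.join(MOUNT, "sync-test.tmp")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
    os.close(fd)
    print("%s: %.2f s" % (label, time.time() - start))
    os.remove(path)

timed_write(0, "buffered")         # acknowledged once in the page cache
timed_write(os.O_SYNC, "O_SYNC")   # acknowledged only on stable storage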

Setting that aside, the question I still have is why NFS's read speeds are so much better than iSCSI's. I think I'll just file a support request, see if I can get someone to look over my setup, and hope they can spot the problem.

Re: Starwind iSCSI vs Microsoft NFS

Posted: Wed Apr 06, 2011 3:35 pm
by anton (staff)
For now our guys are investigating this issue locally, but please stay tuned and be ready for someone from our team to jump onto your hardware remotely.
DavidMcKnight wrote:In the interest of full disclosure, I think I found the "yeah, but" problem with Microsoft NFS: its write speeds with vSphere are terrible. [...]

Re: Starwind iSCSI vs Microsoft NFS

Posted: Tue Apr 12, 2011 9:41 am
by Bohdan (staff)
Hi,
NFS is cached, while StarWind by default is not.
Here are our results.

Server:
CPU Intel Xeon E5430
MB Intel s5000vsa
RAM 8 GB
HDD st31000528as
OS Windows 2008 R2 SP1

Client:
CPU Intel Xeon x3470
MB Intel S3420GPLC
RAM 4 GB
OS ESX 4.1

ATTO Disk benchmark tests

Local HDD test:
Image

NFS:
Image

DiskBridge with WB cache 64 MB:
Image

ImageFile with WB cache 64 MB:
Image

DiskBridge with WB cache 2048 MB:
Image

ImageFile with WB cache 2048 MB:
Image

VM OS start test

VM configuration:
RAM 1 GB
OS Win 2008R2 SP1
HDD 15 GB

Image

Re: Starwind iSCSI vs Microsoft NFS

Posted: Tue Apr 12, 2011 1:04 pm
by Aitor_Ibarra
Hi,

Bohdan, your ATTO results look like you're running both NFS and StarWind over 1G, not 10G: they max out around 100 MB/sec, yet your drives are capable of 125-130 MB/sec. With 10G and the cache levels you are giving StarWind, I get much better results; in fact, with an ATTO test that fits into cache, using a non-HA target on 5.5, I was able to reach 1 GB/sec over 10G.

David, I use Intel 10G too... I would recommend you look at the Receive Side Scaling settings on both the StarWind box and the clients, as they made a big difference for me in testing.

My iSCSI clients (Windows 2008 R2 SP1 running Hyper-V) are currently set like this:
ProSet 15.7.176.0 (i.e. driver downloaded from Intel)
Advanced: Profile: Virtualisation Server (Hyper-V)
Advanced: Settings: Header Data Split : Enabled
Advanced: Interrupt Moderation : Enabled
Advanced: Jumbo Packet: 9014 Bytes
Advanced: Large Send Offload (IPv4) : Enabled
Advanced: Large Send Offload (IPv6) : Enabled
Advanced: Maximum number of RSS Processors : 16
Advanced: Performance Options: Flow Control: Rx & Tx Enabled
Advanced: Performance Options: Interrupt Moderation Rate: Adaptive
Advanced: Performance Options: Low Latency Interrupts : not set
Advanced: Performance Options: Receive Buffers : 512 (but maybe try increasing this)
Advanced: Performance Options: Transmit Buffers : 512 (but maybe try increasing this)
Advanced: Preferred NUMA node : System Default
Advanced: Priority & VLAN : Priority & VLAN enabled
Advanced: Receive Side Scaling : Enabled
Advanced: Receive Side Scaling Queues : 4 Queues (maybe do more if you have the cpu cores)
Advanced: Starting RSS CPU : 0
Advanced: TCP/IP Offloading Options : everything enabled
Advanced: Virtual Machine Queues : enabled (although not actually used, as the adapter is used exclusively for iSCSI)

One StarWind box is set like this (note: these are the host settings; StarWind is actually running in a VM):
Driver: 2.4.36.0 (Microsoft supplied driver, because Intel driver refused to install on clean install of R2 SP1)
Advanced: Flow Control : Rx & TX Enabled
Advanced: Interrupt Moderation : Enabled
Advanced: Interrupt Moderation Rate : Adaptive
Advanced: IPv4 Checksum Offload: Rx & TX Enabled
Advanced: Jumbo Packet: 9014 Bytes
Advanced: Large Send Offload (IPv4) : Enabled
Advanced: Large Send Offload (IPv6) : Enabled
Advanced: Maximum number of RSS Processors : 16
Advanced: Preferred NUMA node : System Default
Advanced: Priority & VLAN : Priority & VLAN enabled
Advanced: Receive Side Scaling : Enabled
Advanced: Receive Side Scaling Queues : 8 Queues
Advanced: Starting RSS CPU : 0
Advanced: TCP Checksum Offload(IPv4) : Rx & TX Enabled
Advanced: TCP Checksum Offload(IPv6) : Rx & TX Enabled
Advanced: Transmit Buffers: 512 (maybe I should increase)
Advanced: UDP Checksum Offload(IPv4):Rx & TX Enabled
Advanced: UDP Checksum Offload(IPv6):Rx & TX Enabled
Advanced: Virtual Machine Queues: Disabled (I will be enabling them though)

The other server is using ProSet 14.5.1.0. I really need to upgrade it, but I'm waiting for a driver that will install OK on SP1.

Hope this helps,

Aitor

Re: Starwind iSCSI vs Microsoft NFS

Posted: Tue Apr 12, 2011 1:24 pm
by Bohdan (staff)
Hi Aitor,

Yes, the NICs are PRO/1000 PT.
Our 10G cards are in use right now; these were just quick tests.
Official Intel suggestions are available at the following links:
http://www.intel.com/support/network/sb/cs-025829.htm
http://www.intel.com/support/network/ad ... ?wapkw=ALL(10+Gbe+options