DavidMcKnight wrote:I’m primarily doing workstation virtualization on VMware vSphere. As my non-scientific benchmark, I time how long it takes some of my larger apps to load. In one case, when the VM is on a StarWind iSCSI target it takes 58 seconds to load; when a clone of that VM sits on an NFS share it takes 17 seconds. This generally holds true for all my apps. The iSCSI target and the NFS share are hosted on the same Windows Server 2008 R2 machine. Both VMs are the only VMs on their particular datastores. The traffic goes out over the same Intel 10 Gig CX4 connection, which is connected to the same Cisco WS-C3560E switch. Both VMs run on the same VMware host, and both are stored on the same Areca 1880 RAID card at the same RAID level, but on different volumes.
I guess the point I’m trying to make is that the only thing that could be causing this big a difference in performance is iSCSI in general or some specific iSCSI settings. I’ve scoured the net looking for optimal 10 Gig iSCSI settings and have tried most of them. As it stands, I’m seeing a dramatic performance difference between iSCSI and NFS. Should I be?
If I’m using StarWind, VMware, and Intel 10 Gigabit, is there a checklist of settings and their values I should be using to get optimal performance from iSCSI in general and/or StarWind specifically?
BTW, yes I have configured my server to use the “Recommended TCP/IP settings” found in this forum.
Header Digest: Prohibited
Data Digest: Prohibited
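For what it’s worth, the same digest settings can be confirmed on the ESXi side so the initiator matches the target; a rough sketch, assuming an ESXi 5.x `esxcli` namespace and a software iSCSI adapter named `vmhba33` (both assumptions — check your actual adapter name first):

```shell
# List current parameters for the software iSCSI adapter (adapter name is an assumption)
esxcli iscsi adapter param get --adapter=vmhba33

# Disable header/data digests to match the target's "Prohibited" setting;
# digests add per-packet CRC work that costs CPU on a 10 Gig link
esxcli iscsi adapter param set --adapter=vmhba33 --key=HeaderDigest --value=prohibited
esxcli iscsi adapter param set --adapter=vmhba33 --key=DataDigest --value=prohibited
```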
@ziz (staff) wrote:Since you are noticing the performance difference when using "Disk Bridge", you should first check the performance of the network link between your StarWind server and your ESX server. If you get the full bandwidth the link can deliver, you can go further and test a RAM device: create a RAM device, connect it to a VM on your ESX host, and test performance using tools like IOmeter or ATTO, etc.
Please refer to our benchmarking servers document to learn more about the different tests you should run to check your system's performance. Follow the link to download the document: http://www.starwindsoftware.com/benchma ... vers-guide
Please note that if you use a Windows VM for testing, it should have the TCP/IP tweaks applied too.
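As a quick stopgap before setting up IOmeter, a sequential-read throughput check can also be scripted; a minimal sketch (the 64 MiB file size and 1 MiB block size are arbitrary choices, and the path should point at the datastore or RAM device under test — not a path from this thread):

```python
import os
import tempfile
import time

def read_throughput(path, block_size=1024 * 1024):
    """Sequentially read a file and return throughput in MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

if __name__ == "__main__":
    # Create a scratch file; in a real test, place it on the device under test.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(64 * 1024 * 1024))  # 64 MiB of incompressible data
        path = f.name
    print(f"{read_throughput(path):.1f} MB/s")
    os.remove(path)
```

Note that a freshly written file may be served from the OS page cache, so for a fair device test the cache needs to be dropped or a file larger than RAM used.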
DavidMcKnight wrote:Understood. I will run the tests as soon as I can and post the results.
In the meantime, I have a dumb question and I can't find the answer anywhere on StarWind's site.
I own a license of StarWind and I'm still within my one-year maintenance agreement. What web page do I go to to see the current public version/release of StarWind (so I can easily tell whether I'm running the latest release)? What page do I go to to download the current build of StarWind? Yes, I can download the demo, but isn't there a place I can log into and download it from?
DavidMcKnight wrote:In the interest of full disclosure, I think I found the "Ya but" problem with Microsoft NFS. Microsoft's NFS has terrible write speeds with vSphere. Even though I get four times the read performance, I think I get one-tenth the write speed. According to my research, this is because when vSphere writes to an NFS share, it tells the datastore not to acknowledge the write until the data is completely written to the disk drives. Writing to the datastore's cache isn't good enough, and there doesn't seem to be any way around it with the NFS services included in Server 2008 R2. But supposedly, if I were to switch to a real Linux solution, I could apply some settings to fix the NFS write issue.
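On the Linux side, the behavior described above maps to the sync/async export options in exports(5); a minimal /etc/exports sketch (the export path and client subnet are placeholder assumptions):

```
# /etc/exports -- 'async' lets the server acknowledge writes before they
# reach disk (much faster for vSphere's synchronous writes, but risks
# data loss if the server crashes); 'sync' is the safe default.
/export/vmstore  192.168.1.0/24(rw,no_root_squash,async,no_subtree_check)
```

After editing, `exportfs -ra` re-reads the file. The speedup comes precisely from weakening the write-acknowledgement guarantee, so this trades safety for performance.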
Setting that aside, the question I still have is: why are NFS's read speeds so much better than iSCSI's? I think I'll just file a support request and see if I can get someone to look over my setup, and hope they can spot the problem.