I'm still running tests to see whether iSCSI performance is acceptable. I've set up a test lab with the following hardware:
Host A: IBM xSeries 330 Server
- dual P3 1.4 GHz
- 4G ECC RAM
- 2x18G 15K RPM SCSI
- Intel Pro 1000 MT Desktop Adapter (32-bit/66 MHz)
- OS: CentOS 3.3 (Red Hat Enterprise Linux 3 clone)
Host B:
- P4 3.0C HT
- 2G DDR 400
- 74G Raptor 10K RPM
- Intel Pro 1000 CT CSA
- OS: Windows XP SP1
Host C:
- P4 1.6C (no HT)
- 1G DDR 333
- 2x200G Maxtor ATA 8M cache
- Intel Pro 1000 MT Desktop (PCI Bus)
- OS: Windows 2000 Server SP4
Switch: Dell PowerConnect 2616
- 16-port gigabit
- 48 Gbps fabric capacity
- No jumbo frame support
Running iperf, I'm getting the following results (example commands follow the second set of numbers):
Host A -> Host B
- 8K - 320Mb/s CPU: 7%
- 16K - 510Mb/s CPU: 12%
- 32K - 630Mb/s CPU: 15%
- 64K - 800Mb/s CPU: 22%
- 128K - 810Mb/s CPU: 22%
- 256K - 810Mb/s CPU: 27%
- 512K - 810Mb/s CPU: 33%
- Over 512K - 810Mb/s CPU: 35%
Host B -> Host C
- 8K - 320Mb/s CPU: 7%
- 16K - 510Mb/s CPU: 12%
- 32K - 630Mb/s CPU: 15%
- 64K - 810Mb/s CPU: 20%
- 128K - 830Mb/s CPU: 22%
- 256K - 860Mb/s CPU: 25%
- 512K - 860Mb/s CPU: 35%
- Over 512K - 860Mb/s CPU: 37%
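In case anyone wants to reproduce these numbers, a typical pair of iperf commands for one of the runs would look something like this (assuming the sizes above are the TCP window size set with -w; the address is just a placeholder):

    # on the receiving host
    iperf -s -w 64K
    # on the sending host: 30-second run, report in Mbits/sec
    iperf -c 192.168.0.10 -w 64K -t 30 -f m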
The Intel NICs have send and checksum offload, so the CPU time is not too bad. Host C is running gigabit over a regular PCI bus, so that result is very good. I'm waiting for a new board with an Intel CSA NIC to ship.
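If anyone wants to see exactly what the e1000 driver enabled on the Linux box, something like this should list the checksum/segmentation offload settings (assuming eth0 is the Pro/1000 interface):

    ethtool -k eth0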
I haven't fully tuned the TCP stack yet; on the Linux box in particular, no tuning has been done at all.
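The Linux-side tuning I have in mind is basically just larger socket buffers, something along these lines (illustrative values only, nothing applied yet):

    # /etc/sysctl.conf additions on the CentOS box, then reload with: sysctl -p
    net.core.rmem_max = 8388608
    net.core.wmem_max = 8388608
    net.ipv4.tcp_rmem = 4096 87380 8388608
    net.ipv4.tcp_wmem = 4096 65536 8388608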
We'll be running RAID 10 on the iSCSI target so disk performance should not be the issue.
I don't have access to a switch with jumbo frame support at this time (I know I could directly connect two hosts, but that is not the scenario we are going to deploy), so what can you achieve without jumbo frames? Do jumbo frames really help with speed and CPU load?
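If a jumbo-capable switch does turn up, enabling them should just be an MTU change on each end, e.g. on the Linux box (assuming eth0 is the gigabit interface):

    ifconfig eth0 mtu 9000

plus the equivalent MTU/jumbo setting in the Intel driver properties on the Windows hosts.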
Thanks
Chris