I have a rack of Dell 2950s running ESXi 5.1, all connected to an iSCSI LUN on another 2950 running FreeNAS with 6x 2TB 7200 RPM Seagates. I just added one more 2950 with an LSI 9240 that breaks out to an SGI/Rackable SE3016 loaded with 16x 73GB 15K SAS drives.
Drive/Network performance metrics:
Sample DD command:
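Roughly this (the pool path is a placeholder, and the bs/count values are an assumption chosen so the total comes out to the 20971520000 bytes shown in the results):

```shell
# Write ~20 GB of zeros to the pool, then read it back.
# /mnt/tank is a placeholder for the actual dataset path.
# 2M x 10000 = 20971520000 bytes.
dd if=/dev/zero of=/mnt/tank/ddtest bs=2M count=10000
dd if=/mnt/tank/ddtest of=/dev/null bs=2M
rm /mnt/tank/ddtest
```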
Current Production NAS (iSCSI/SATA):
iperf: 920 Mb/s avg on 4 threads
dd 20 GB self-copy: unavailable (it's a device extent on the iSCSI share, so there's no "internal" storage to copy to). Current peak drive throughput is 72/74 MB/s in/out; I can't spin everything down right now, so treat that as a working average rather than the drives' true maximum.
New NAS (NFS/SAS):
iperf: 920 Mb/s avg on 4 threads
dd 20 GB self-copy: 20971520000 bytes transferred in 36.391064 secs (576282135 bytes/sec, ~576 MB/s)
dd from the old device to an NFS share mounted from the new device: 2097152000 bytes transferred in 30.661172 secs (68397647 bytes/sec) — about 68 MB/s, in line with the old array's observed throughput.
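For reference, that NFS test was of this shape (the "newnas" hostname and mount paths are placeholders; bs/count are an assumption matching the 2097152000-byte total):

```shell
# On the old NAS: mount the new NAS's NFS export, then dd ~2 GB across.
# "newnas" and both paths stand in for the actual host/dataset names.
# 2M x 1000 = 2097152000 bytes.
mount -t nfs newnas:/mnt/tank /mnt/nfstest
dd if=/dev/zero of=/mnt/nfstest/ddtest bs=2M count=1000
rm /mnt/nfstest/ddtest
umount /mnt/nfstest
```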
Any operation within vSphere gets speeds like this test dd run from a VM hosted on the SAS datastore: 20971520000 bytes (21 GB) copied, 2765.61 s, 7.6 MB/s
It appears that something is going wrong such that operations on this one datastore are extremely slow, but only when going through ESXi/vCenter. It doesn't look like a protocol issue or a hardware limitation/fault, since everything behaves properly outside VMware. I have a case open with VMware tech support, but I wanted to pose this scenario to the community as well and see if anyone else has run into similar problems. If anyone has any ideas as to why this may be happening, I'd be interested in trying out some scenarios!