We just dropped in our new storage, which is running 10 Gb iSCSI from all of our hosts.
The storage is all SSD in a RAID 5 (10 drives).
I believe something must be wrong because performance is really bad.
Hosts:
HP DL360 G7
Dual quad-core CPUs
144 GB memory
OS: VMware vSphere 4.1 U3
vCenter running a DRS cluster
Here is my situation, and I am out of ideas:
I connected my hosts to the new storage using the software iSCSI initiator.
Added a 1 TB datastore and built a single VM on it running Windows Server 2008 R2.
The VMDK for the C: drive is provisioned at 80 GB (not that it matters).
I loaded IOMeter on the VM and set it up as follows (a rough scripted equivalent is sketched after the list):
2 worker processes
4 KB transfer size
90% random
60% read / 40% write
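In case it helps anyone cross-check IOMeter from inside the guest, here is a rough Python sketch of the same workload. The file path, size, and op count are placeholders I made up, and it is not a substitute for IOMeter: Python adds per-I/O overhead, writes land in the OS cache unless you flush them, and reads of never-written regions of a freshly created file can be served as zeros without ever touching the array, so use a pre-filled file much larger than guest RAM if you try it.

# Rough stand-in for the IOMeter spec above: 4 KiB transfers,
# 90% random offsets, 60% reads / 40% writes, single worker.
import os
import random
import time

PATH = "D:\\iotest.bin"   # placeholder path on the datastore-backed disk
SIZE = 8 * 1024 ** 3      # 8 GiB; make this much larger than guest RAM
BLOCK = 4096              # 4 KiB transfer size
OPS = 20000               # number of I/Os to issue

# Create the test file once if it is missing or too small.
if not os.path.exists(PATH) or os.path.getsize(PATH) < SIZE:
    with open(PATH, "wb") as f:
        f.truncate(SIZE)  # note: untouched regions read back as zeros

buf = os.urandom(BLOCK)
blocks = SIZE // BLOCK
pos = 0
start = time.time()
with open(PATH, "r+b", buffering=0) as f:
    for _ in range(OPS):
        # 90% of offsets random, 10% sequential (next block)
        if random.random() < 0.90:
            pos = random.randrange(blocks)
        else:
            pos = (pos + 1) % blocks
        f.seek(pos * BLOCK)
        # 60% reads / 40% writes
        if random.random() < 0.60:
            f.read(BLOCK)
        else:
            f.write(buf)
elapsed = time.time() - start
iops = OPS / elapsed
print("%.0f IOPS, %.1f MB/s" % (iops, iops * BLOCK / 1e6))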
If I kick that off, I get a whopping 900 IOPS and around 4 MB/s.
An all-SSD array should be much faster than that.
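For what it's worth, those two figures are at least consistent with each other, and at a low queue depth they point at per-I/O latency rather than bandwidth as the limit. This assumes IOMeter's default of 1 outstanding I/O per worker (2 total); if you raised the outstanding I/O count, the latency math below changes.

# Quick consistency check on the reported numbers
iops = 900
block = 4096              # 4 KiB transfer size
outstanding = 2           # assumption: 1 outstanding I/O per worker x 2 workers

print(iops * block / 1e6)             # ~3.7 MB/s, which matches the ~4 MB/s observed
print(outstanding * 1000.0 / iops)    # ~2.2 ms average latency per I/O at that depth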
My question is: is this the proper way to test, with IOMeter installed inside a VM like this?
Once I got these numbers, I hooked up a physical Windows machine (Server 2008 R2) and connected it straight to the 1 Gb interface on the same storage unit, with no switch in between.
I mounted the LUN as a drive in Windows, ran IOMeter from that physical server, and got the exact same numbers, or really close to them.
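That direct test also suggests the network path probably isn't the ceiling, since roughly 4 MB/s is only a few percent of what even a single 1 Gb link can move:

# Rough comparison of observed throughput vs. one GbE link
gige_bytes_per_sec = 1e9 / 8     # ~125 MB/s raw; ~110-115 MB/s after TCP/iSCSI overhead
observed_bytes_per_sec = 4e6     # ~4 MB/s measured
print(observed_bytes_per_sec / gige_bytes_per_sec)  # ~0.03, i.e. about 3% of the link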
I know, I know, you'll say: well, what happens if you run 100% sequential? I tested that as well. I get about 1,200 IOPS and about 20 MB/s.
I think there is something wrong with our storage, but I want to make sure I have covered my bases in testing.