I am looking for some assistance with an implementation. I am attempting to configure 4 vNICs in vSphere 5 for iSCSI to connect to 4 iSCSI ports defined on a VNX 5300 in a high-bandwidth configuration.
Attached is a diagram, but here is what I am planning to do, and I would like to know if anyone sees any issues with this implementation plan.
I have a VNX 5300 with two 10 Gb ports per SP. I have defined 2 vLANs for iSCSI traffic and have configured the SP ports as follows:
SPA-0 with an address on vLAN 1
SPA-1 with an address on vLAN 2
SPB-0 with an address on vLAN 2
SPB-1 with an address on vLAN 1
I have 6 ESXi vSphere 5 hosts, each with four 10 Gb vNICs dedicated to iSCSI use. On each ESXi 5 host, I have defined 2 vSwitches for iSCSI, each with 2 vNICs. On vSwitch 1, vNIC 2 is configured for vLAN 1 and vNIC 5 is configured for vLAN 2. On vSwitch 2, vNIC 4 is configured for vLAN 1 and vNIC 3 is configured for vLAN 2.
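For reference, the per-host layout above would look roughly like this in esxcli on ESXi 5 (showing vSwitch 1 only; vSwitch 2 would mirror it with vmnic4/vmnic3). All names, VLAN IDs, and IP addresses here are placeholders for illustration, not your actual values:

```shell
# Sketch only: vSwitch1, vmnic2/vmnic5, VLAN IDs 10/20, vmhba33, and all
# addresses are assumed/placeholder values -- substitute your own.

# vSwitch 1 with its two iSCSI uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5

# One port group per vNIC, tagged with its iSCSI vLAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-vLAN1
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-vLAN1 --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-vLAN2
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-vLAN2 --vlan-id=20

# Pin each port group to exactly one active uplink
# (required for software iSCSI port binding)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-vLAN1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-vLAN2 --active-uplinks=vmnic5

# VMkernel interface on the vLAN 1 port group, bound to the software iSCSI HBA
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-vLAN1
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

The one-active-uplink-per-port-group pinning is what makes each bound vmknic deterministic about which fabric it uses.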
vNIC 2 of vSwitch 1 and vNIC 4 of vSwitch 2 (both on the vLAN 1 network) are connected to Fabric Interconnect A (UCS) while vNIC 5 of vSwitch 1 and vNIC 3 of vSwitch 2 (both on the vLAN 2 network) are connected to Fabric Interconnect B (UCS).
On UCS Fabric Interconnect A, I have configured 2 Appliance ports and connected VNX SPA-0 and VNX SPB-1 (both on the vLAN 1 network). On UCS Fabric Interconnect B, I have configured 2 Appliance ports and connected VNX SPA-1 and VNX SPB-0 (both on the vLAN 2 network).
Given this configuration, would all 4 ports of the VNX and of each ESXi host be utilized for iSCSI I/O, assuming each host connects to multiple LUNs on the VNX that are spread evenly between the two SPs?
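One way to check this after the fact is to look at the paths NMP reports per device and set the path selection policy to Round Robin so the active paths actually get used. This is just a sketch assuming the default ALUA handling for the VNX; the device ID is a placeholder:

```shell
# List devices and their claimed SATP/PSP, then the paths per device
esxcli storage nmp device list
esxcli storage nmp path list --device=naa.6006xxxxxxxxxxxx   # placeholder device ID

# Set Round Robin on a LUN so I/O rotates across its active-optimized paths
# (i.e., the paths to the SP that owns the LUN)
esxcli storage nmp device set --device=naa.6006xxxxxxxxxxxx --psp=VMW_PSP_RR
```

With LUNs balanced across SPA and SPB, Round Robin per LUN plus owner-balanced LUNs is what gets all four SP ports carrying traffic.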
What I do not want is iSCSI traffic from Fabric Interconnect A trying to reach SP ports that are only connected to Fabric Interconnect B, by traveling upstream and coming back down through the other Fabric Interconnect. Given the networking gear upstream, that would cause a serious performance problem.
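A quick sanity check for that concern: vmkping with -I pins the source VMkernel interface, so from each host you can confirm that each vmknic reaches only the SP ports on its own vLAN/fabric and has no path to the other side. Interface names and addresses below are placeholders:

```shell
# vmk2 lives on vLAN 1 (Fabric A) in this sketch; SP addresses are placeholders
vmkping -I vmk2 192.168.10.50   # SPA-0 on vLAN 1 -- should answer
vmkping -I vmk2 192.168.10.51   # SPB-1 on vLAN 1 -- should answer
vmkping -I vmk2 192.168.20.50   # SPA-1 on vLAN 2 -- should NOT answer if the vLANs are isolated
```

If the cross-vLAN ping succeeds, there is a routed path upstream that the iSCSI sessions could fall back to, which is exactly the scenario to rule out.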
Anyone see any issues with this design? Thoughts and/or comments?