iSCSI with 2 NIC 10GbE Hosts (with vDS & LBT & NIOC)

I am trying to decide on a network/vSwitch design for 2-NIC 10GbE hosts. What are the pros/cons of active/active vmnics versus active/unused with VMkernel port binding to the iSCSI initiator? I plan to use Load Based Teaming and Network I/O Control with active/active (see the link below).


Are there pitfalls or advantages to letting the vDS port groups handle NIC failures instead of the storage stack (port binding)? Do the two approaches offer the same resiliency in other respects (switch failure, SAN controller failure)? Most information out there simply assumes port binding is the best way to do it, without much explanation, and often only covers standard vSwitches.
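For concreteness, binding two VMkernel ports to the software iSCSI initiator on ESXi 5.0 looks roughly like the following (the adapter and vmk names are placeholders for your environment, not values from this setup):

```shell
# Placeholder names: vmhba34 = software iSCSI adapter, vmk1/vmk2 = iSCSI VMkernel ports.
# Prerequisite: each iSCSI port group must map to exactly one active uplink,
# with the other uplink set to Unused, or the bind is rejected.
esxcli iscsi networkportal add --adapter vmhba34 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba34 --nic vmk2

# Confirm both VMkernel ports are bound to the initiator:
esxcli iscsi networkportal list --adapter vmhba34
```

With active/active teaming instead, you would skip the bind entirely and rely on the vDS to fail traffic over between uplinks.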


So far I've gone with option 2 here (with a second iSCSI port group/VMkernel interface):  http://blogs.vmware.com/networking/2011/12/vds-best-practices-rack-server-deployment-with-two-10-gigabit-adapters.html


Any advice is appreciated! I've had a hard time finding much concrete analysis/thoughts on this.


Environment:

  • vSphere 5.0 Ent Plus
  • 2 10GbE ports per host (Intel 82599EB, X520-DA2)
  • Compellent iSCSI SAN (2 10GbE ports per controller, 2 controllers, 2 fault domains, Round Robin)


VMkernel Interfaces/vDS Port Groups:

  • Management Network
  • vMotion Network
  • iSCSI 1
  • iSCSI 2
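Since the Compellent paths are set to Round Robin, the path selection policy can be scripted so that newly presented volumes don't fall back to the default PSP. A hedged sketch (the SATP name and the naa device ID are assumptions; confirm the actual claim rule with `esxcli storage nmp device list` first):

```shell
# Assumption: the Compellent volumes are claimed by VMW_SATP_DEFAULT_AA;
# verify with `esxcli storage nmp device list` before changing the default.
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

# Or set Round Robin on an already-presented device (naa ID is a placeholder):
esxcli storage nmp device set --device naa.6000d31000000000 --psp VMW_PSP_RR
```

Some arrays also benefit from lowering the Round Robin IOPS limit per path switch (`esxcli storage nmp psp roundrobin deviceconfig set`), but that is worth validating against the vendor's guidance rather than assuming.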
