I recently moved two working ESXi 4.1 servers from one machine room and rack to another. When I tried to start the machines back up, networking appeared to be broken.
I have two HP DL380 G6 servers (servers A and B) that were fully loaded with virtual machines, with several networks attached (Management/vMotion, VLAN 1, VLAN 2, and fabric storage). These servers were placed in maintenance mode and the virtual machines were vMotioned off to two other hosts (servers C and D) that had joined the cluster and were already installed in the destination rack and floor space. Once all of the virtual machines were off, servers A and B were shut down, moved, and placed in the same rack as their companions, where all four servers were connected to the switch and the appropriate VLANs.
When we powered on servers A and B, we were unable to reach the network, even though the NICs showed link lights. We eventually confirmed that the switch was configured correctly by swapping server A's network cable with server C's: server C could still ping the gateway and remained under management on server A's switch port, while server A continued to not communicate.
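For reference, this is roughly what I have been checking from the Tech Support Mode shell on server A. The gateway IP below is a placeholder from my setup, not necessarily what anyone else would see:

    # List the physical NICs with their link state, speed, and driver
    esxcfg-nics -l

    # List the vSwitches, port groups, and uplink assignments
    esxcfg-vswitch -l

    # List the VMkernel interfaces (management, vMotion)
    esxcfg-vmknic -l

    # Ping the gateway from the VMkernel stack
    # (10.0.0.1 is a placeholder for our actual gateway)
    vmkping 10.0.0.1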
I need ideas on how to proceed.
In the Solaris environment, one would normally plumb an interface, or even flush the routing tables, in a situation like this. I have not figured out how to do the equivalent in ESXi.
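For comparison, this is what I would do on Solaris, followed by the closest ESXi 4.1 commands I have found so far. I have not yet confirmed that these are the right counterparts, and the interface name and gateway IP are placeholders:

    # Solaris: bring up ("plumb") an interface, then flush the routing table
    ifconfig e1000g0 plumb
    route flush

    # Candidate ESXi 4.1 equivalents from the Tech Support Mode shell:
    esxcfg-route -l          # list the VMkernel routing table
    esxcfg-route 10.0.0.1    # (re)set the VMkernel default gateway (placeholder IP)
    services.sh restart      # restart the management agents on the host

I believe the DCUI also has a "Restart Management Network" option, which may be the closest analogue, but I have not verified this.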