We have two full-length Dell M910 blade servers sitting in a Dell blade enclosure. We installed ESXi 5.0 on both blades and joined them to a cluster.
Each full-length blade has 8 NICs: 2 dual-port onboard NICs and 2 dual-port Ethernet mezzanine cards. All of them connect to the internal Cisco 3130 switches installed in I/O modules A1, A2, B1 and B2. The internal switches are stacked together by the network team, and there is an uplink from the internal switches to ports on the external switch, which are on VLAN 137.
All ports that connect to the ESXi hosts are configured as trunks on the internal Cisco blade switches by the network team. In our case that is 16 ports in total (8 NICs x 2 servers) set to trunk on the internal Cisco switches, plus the uplink from the internal switches to our external switch (which is on VLAN 137).
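I don't manage the blade switches myself, but I assume the server-facing trunk ports look roughly like this; the interface numbers and the allowed-VLAN list are my guesses, not taken from the actual config:

    ! internal Cisco 3130 - one of the 16 server-facing ports
    interface GigabitEthernet1/0/1
     description ESXi host NIC
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 137
     spanning-tree portfast trunk
    !
    ! uplink towards the external switch (VLAN 137)
    interface GigabitEthernet1/0/17
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 137

If the real config differs from this, please point out what matters.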
On ESXi 5.0 we have configured one big flat switch, assigning all eight physical NICs to vSwitch0.
Please refer to the screenshot for the port groups that are configured.
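In case the screenshot is not clear, this is how I pulled the same information from the ESXi shell on either host (commands only; the port group names are as shown in the screenshot):

    ~ # esxcli network vswitch standard list
    ~ # esxcli network vswitch standard portgroup list
    ~ # esxcli network ip interface ipv4 get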
To isolate vMotion traffic, we have configured a different VLAN tag (150) on the vMotion port group, but vMotion is not working: the hosts cannot ping each other's vMotion IPs. If I change the VLAN tag to 137, vmkping works in both directions and vMotion works.
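For reference, this is roughly how I change the tag and test; the port group name is what we use, and the target IP below is just a placeholder for the other host's vMotion address:

    # set the VLAN tag on the vMotion port group (150 fails, 137 works)
    ~ # esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=150
    # then from each host, ping the other host's vMotion vmkernel IP
    ~ # vmkping <other-host-vmotion-ip>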
Likewise, if I change the VLAN tag on any other port group (such as Management or Virtual Machine) to anything other than 137, I lose connectivity to that port group.
I believe I am missing something on the internal Cisco blade switches (3130). Please advise on what needs to be configured. I roughly understand why trunking is required, but if you could explain the exact purpose of trunking for ESXi, that would be great.
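My own guess is that VLAN 150 either does not exist on the stacked 3130s or is not allowed on those 16 trunk ports, so something along these lines would be needed; please correct me if that is wrong (the interface range is again a guess):

    ! define the vMotion VLAN on the stacked internal switches
    vlan 150
     name vMotion
    !
    ! allow it on the server-facing trunks (and on any inter-switch trunks, if applicable)
    interface range GigabitEthernet1/0/1 - 16
     switchport trunk allowed vlan add 150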
What is the best practice for configuring virtual switches: one big flat switch, or multiple vSwitches with port groups assigned to each? And what is the recommended configuration to get better inbound and outbound load balancing and failover? A detailed explanation would be really helpful for non-networking admins.
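To be concrete, this is the setting I am asking about; as far as I can tell we are still on the defaults (checked from the ESXi shell):

    # show the current load-balancing / failover policy on vSwitch0
    ~ # esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0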