Hi
My networking team suggested the following config:
- VLAN-Public: first NIC of the load balancer
- VLAN-Front: second NIC of the load balancer
- VLAN-Front: first and second NIC of vCloud Cell01, and the same for vCloud Cell02
- VLAN-Back: vCenter / ESXi / vShieldMgr / SQL / etc.
Traffic / routing between VLAN-Front and VLAN-Back is possible.
I then installed vCell01 with two NICs: 192.168.1.1 on eth0 and 192.168.1.11 on eth1.
vCell02 also has two NICs: 192.168.1.2 on eth0 and 192.168.1.12 on eth1.
Installation was no problem and connecting to SQL went smoothly.
But then I noticed that I couldn't reach 192.168.1.11 from the vCenter Server. That is logical now that I think of it: traffic routed from vCenter to 192.168.1.11 comes in on eth1, but because the default gateway sits on eth0, the replies leave over eth0. vCenter will probably not like that asymmetric path.
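If I understand it correctly, the kernel's route lookup should confirm this. Something like the following should show which way a reply sourced from the eth1 address would go (192.168.10.50 is just a made-up stand-in for the vCenter address on VLAN-Back, not my real IP):

    # Which way would a packet sourced from eth1's address (192.168.1.11) leave?
    # I expect it to pick eth0 because of the default route.
    # 192.168.10.50 is a hypothetical vCenter address on VLAN-Back.
    ip route get 192.168.10.50 from 192.168.1.11

    # Current routing table, for reference
    ip route show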
My questions:
- Can I solve this with changes to the routing table on the vCells themselves (see the sketch after this list)?
- Should I completely change my configuration / network design?
- Should I just continue and not worry about vCenter being unable to reach 192.168.1.11?
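For the first question, what I had in mind is source-based policy routing, so replies from 192.168.1.11 go back out over eth1 instead of eth0. Rough and untested sketch; the table name "front-eth1" and the 192.168.1.254 gateway are just assumptions, not my real values:

    # Give eth1 its own routing table so traffic sourced from 192.168.1.11
    # leaves over eth1 instead of following the default route on eth0.
    echo "200 front-eth1" >> /etc/iproute2/rt_tables

    # On-link route for VLAN-Front plus a default route via eth1 in that table
    # (192.168.1.254 is an assumed gateway on VLAN-Front)
    ip route add 192.168.1.0/24 dev eth1 src 192.168.1.11 table front-eth1
    ip route add default via 192.168.1.254 dev eth1 table front-eth1

    # Anything sourced from eth1's address uses the front-eth1 table
    ip rule add from 192.168.1.11/32 table front-eth1
    ip route flush cache

    # To survive reboots on CentOS 6 this would go into
    # /etc/sysconfig/network-scripts/route-eth1 and rule-eth1
    # (and the same on Cell02 with 192.168.1.12)

Would that be a sane approach, or am I overcomplicating it?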
Any tips welcome
PS: Using CentOS 6.5
Regards
Gabrie