You need two bridge interfaces:
xenbr0 = eth0
xenbr1 = eth1
Create a new script (e.g. multiplebridge.sh) in /etc/xen/scripts/ with the following lines:
#!/bin/sh
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
Make the script executable (chmod +x /etc/xen/scripts/multiplebridge.sh).
Modify /etc/xen/xend-config.sxp and change the line:
(network-script network-bridge)
to
(network-script multiplebridge.sh)
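Note that xend only reads xend-config.sxp at start-up, so the daemon has to be restarted (e.g. /etc/init.d/xend restart) before the new script is used. As a sketch, a hypothetical pre-flight check (the helper name check_script is my own, not part of Xen) that the script is in place before you restart:

```shell
#!/bin/sh
# Hypothetical pre-flight check (not from the original post): verify
# the network script exists and is executable before restarting xend,
# since a missing or non-executable script means the extra bridges
# will not come up.
check_script() {
    if [ -x "$1" ]; then
        echo "ok: $1"
    else
        echo "missing or not executable: $1"
        return 1
    fi
}
check_script /etc/xen/scripts/multiplebridge.sh || true
```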
Modify your virtual machine configuration to use the new bridge interface, e.g.:
vif = [ 'bridge=xenbr1', ]
Hope this helps.
Another helpful write-up:
Using multiple network cards in XEN 3.0
Posted by itsec on Tue 5 Dec 2006 at 11:04
Xen is great. But installing more than one network card became a pain when I tried it the first time. There are some documents describing the principle, but I was unable to find a real-life example elsewhere. So this is a summary of how it works here now.
Using a bridge for a Dom is generally a good idea, but all packets traversing a bridge can be intercepted by any Dom that is using the same bridge. Having a single network card in a Xen landscape also means that, in theory, each Dom could sniff all packets traversing this single network card, including packets to and from other Doms. A solution is to attach more than one network card to Xen and dedicate a single network card to a single Dom.
The scenario described here has a server with 3 network cards installed. The first card should be used to access Dom0 and some other DomNs, while the second and third network cards should be used to purely access Dom1 and Dom2 respectively. The Dom configuration file just needs to select the appropriate bridge for each Dom.
Topology:
eth0 - xenbr0 - Dom0, DomN
eth1 - xenbr1 - Dom1 (cannot be sniffed by Dom0, DomN or Dom2)
eth2 - xenbr2 - Dom2 (cannot be sniffed by Dom0, Dom1 or DomN)
The benefit of using bridging is that no manual routing configuration is required as all routes are dealt with by Xen itself.
/etc/xen/xend-config.sxp:
...
#(network-script network-bridge)
(network-script my-network-script)
...
Change the networking to have more than a single bridge. Here we set up a new script that will start a bridge for each NIC installed:
/etc/xen/scripts/my-network-script:
#!/bin/sh
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
"$dir/network-bridge" "$@" vifnum=2 netdev=eth2 bridge=xenbr2

Do not forget to chmod u+x this script!
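The repeated invocations can also be generated in a loop, which keeps the script short if you add more NICs later. This is a hypothetical variant of my own, not from the article; the function name start_bridges is made up, and the helper path is the standard Xen network-bridge script:

```shell
#!/bin/sh
# Hypothetical loop variant of my-network-script (not from the
# original article): one network-bridge call per NIC, generated in a
# loop instead of repeating the line per interface.
start_bridges() {
    helper=$1   # path to Xen's network-bridge helper
    shift
    i=0
    for nic in eth0 eth1 eth2; do
        "$helper" "$@" vifnum=$i netdev=$nic bridge=xenbr$i
        i=$((i + 1))
    done
}
# Only call the real helper when it is actually installed:
if [ -x "$(dirname "$0")/network-bridge" ]; then
    start_bridges "$(dirname "$0")/network-bridge" "$@"
fi
```

The bridge number simply follows the position of the NIC in the list, matching the eth0/xenbr0, eth1/xenbr1, eth2/xenbr2 topology above.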
And finally this is how each DomU can be configured:
/etc/xen/anyXmDomain.cfg:
Change the IP and MAC addresses as YOU need them!
...
# use eth0 for this DomU
vif = ['ip=10.XX.XX.230,mac=00:16:de:ad:fa:ce,bridge=xenbr0']
...
or
...
# use eth1 for Dom1
vif = ['ip=10.XX.XX.234,mac=00:16:de:ad:be:ef,bridge=xenbr1']
...
or
...
# use eth2 for Dom2
vif = ['ip=10.XX.XX.238,mac=00:16:be:ef:fa:ce,bridge=xenbr2']
...
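The MAC addresses in these examples are hand-picked placeholders. If you prefer generated addresses, the 00:16:3e OUI is the range the Xen project reserves for virtual interfaces, so addresses from it cannot clash with a physical card. A minimal sketch (the helper name random_xen_mac is my own invention):

```shell
#!/bin/sh
# Hypothetical helper (not from the original article): generate a
# random MAC in the 00:16:3e range, the OUI reserved by the Xen
# project for virtual interfaces.
random_xen_mac() {
    # od prints three random bytes as hex; awk glues on the prefix.
    od -An -N3 -tx1 /dev/urandom |
        awk '{ printf "00:16:3e:%s:%s:%s\n", $1, $2, $3 }'
}
random_xen_mac
```

The generated address can be pasted straight into the mac= field of a vif line.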
As said, there is no additional routing required in Dom0 or in DomU beyond the normal routing you would configure with a single network card attached to Xen.
From the DomU perspective nothing changes. Each DomU will automatically use the bridge defined in its configuration file. The only change in behaviour you will notice is that the LEDs of the second and third NICs start blinking as soon as Dom1 or Dom2 respectively sends or receives packets. You can even pull the cable out of the first NIC (eth0) while Dom1 (eth1) and Dom2 (eth2) continue working normally.
Dom0 routing:
# netstat -arn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.XX.XX.0      0.0.0.0         255.255.255.0   U         0 0          0 eth0
0.0.0.0         10.XX.XX.254    0.0.0.0         UG        0 0          0 eth0
The script above will create these bridges automatically for you so there is no need to manually change anything in the bridging settings.
Dom0 bridging:
# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              peth0
                                                        vif0.0
                                                        vif1.0
                                                        vif3.0
                                                        vif4.0
xenbr1          8000.feffffffffff       no              peth1
                                                        vif0.1
                                                        vif6.0
xenbr2          8000.feffffffffff       no              peth2
                                                        vif0.2
                                                        vif7.0
Each DomU can be used as usual. The DomU itself is not even aware that it is using a particular Xen bridge. From the DomU's point of view there is simply a (virtual) NIC that is used as eth0.
Dom1/Dom2 eth0 configuration (the HWaddr shown is Dom1's):
# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:DE:ED:BE:EF
          inet addr:10.XX.XX.234  Bcast:10.XX.XX.255  Mask:255.255.255.0
          inet6 addr: fe80::216:daff:feda:ba5e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:257357 errors:0 dropped:0 overruns:0 frame:0
          TX packets:238053 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:32954128 (31.4 MiB)  TX bytes:51239288 (48.8 MiB)
There is nothing special about DomU routing. As the DomU does not know about the Xen bridge, it routes normally to the gateway, which is 10.XX.XX.254 in this example.
Dom1/Dom2 routing:
# netstat -arn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.XX.XX.0      0.0.0.0         255.255.255.0   U         0 0          0 eth0
0.0.0.0         10.XX.XX.254    0.0.0.0         UG        0 0          0 eth0
Well, that is mainly it. Easy! Any DomU you start now will use the appropriate interface, so each DomU takes the full benefit of individual bridging. From my point of view this is a much better approach than controlling the PCI interface directly from a DomU, which would also be possible.
If I forgot to mention something or you have corrections please give me a friendly hint.
Cheers and have fun,
Torsten