What is Network bonding?
Network bonding is a method of combining (aggregating) two or more network interfaces into a single logical interface. It increases network throughput and bandwidth, and provides redundancy: if one interface goes down or is unplugged, the remaining interfaces keep the traffic flowing. Network bonding is useful in any situation where you need redundancy, fault tolerance or load balancing.
Linux allows us to bond multiple network interfaces into a single interface using a special kernel module named bonding. The Linux bonding driver provides a method for combining multiple network interfaces into a single logical "bonded" interface. The behaviour of the bonded interface depends on the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, link integrity monitoring may be performed.
Types of Network Bonding
- Round-robin policy: This is the default mode. It transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
- Active-backup policy: In this mode, only one slave in the bond is active. Another slave becomes active only if the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.
- XOR policy: Transmits based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
- Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.
- IEEE 802.3ad Dynamic link aggregation: Creates aggregation groups that share the same speed and duplex settings, and utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites:
– Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
– A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some configuration to enable 802.3ad mode.
- Adaptive transmit load balancing: Channel bonding that does not require any special switch support. Outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite:
– Ethtool support in the base drivers for retrieving the speed of each slave.
- Adaptive load balancing: Includes adaptive transmit load balancing plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. Receive load balancing is achieved by ARP negotiation: the bonding driver intercepts ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different peers use different hardware addresses for the server.
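As a rough illustration of how the XOR policy picks a slave: with the default layer-2 hash, the selection boils down to XOR-ing a byte of the source and destination MAC addresses and taking the result modulo the slave count. A minimal shell sketch (the octet values below are made-up examples, not from a real capture):

```shell
# Sketch of balance-xor slave selection (default layer-2 hash):
# slave index = (src MAC octet XOR dst MAC octet) mod slave count.
# The byte values here are hypothetical examples.
src_last=0x6e     # last octet of the source MAC
dst_last=0x02     # last octet of the destination MAC
slave_count=2     # number of slaves in the bond

index=$(( (src_last ^ dst_last) % slave_count ))
echo "packets to this peer use slave index: $index"
```

Because the hash depends only on the MAC pair, every packet to a given destination always lands on the same slave, which is why this mode balances across peers rather than across individual flows.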
Setting up Network Bonding on Ubuntu 14.10
I tested this how-to on Ubuntu 14.10, and it worked well.
We need at least two network cards; beyond that, you are free to use any number of NICs.
I have three network interfaces, namely eth0, eth1 and eth2 in my Ubuntu 14.10 desktop. Let us combine two NICs (eth1 and eth2) and make them into one NIC named bond0.
Install Bonding Kernel Module
The following commands should be run with root privileges.
First, install the ifenslave package, which provides the user-space tools for the bonding kernel module:
apt-get install ifenslave-2.6
Now, we have to make sure that the bonding kernel module is loaded at boot time.
Edit the /etc/modules file and add "bonding" at the end:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

lp
rtc
bonding
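The edit above can also be scripted so it is safe to run more than once. A sketch of an idempotent append (ensure_module is a helper name of our own, not a standard command):

```shell
# Append a module name to a modules file only if it is not already listed.
# ensure_module is a hypothetical helper, not a standard tool.
ensure_module() {
    mod="$1"
    file="$2"
    # -q: quiet, -x: match the whole line, so "bonding" is not
    # confused with a substring of another module name.
    grep -qx "$mod" "$file" 2>/dev/null || echo "$mod" >> "$file"
}

# On a real system (as root): ensure_module bonding /etc/modules
```

Running it twice leaves only one "bonding" line in the file, which keeps /etc/modules clean if the how-to is replayed.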
Now, stop the networking service:

sudo service networking stop
Warning: Do not run the above command over an SSH connection, as it will drop your session.
Then load the bonding kernel module:
sudo modprobe bonding
Configure Bond0 Interface
First, let us create a bond0 configuration file as shown below.
Go to the directory where Debian/Ubuntu stores the network configuration files. By default, Debian and its derivatives store the network configuration under the /etc/network/ directory.
Edit the network configuration file in the above-mentioned directory and add the following lines to create a network bond of eth1 and eth2:
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

# eth1 configuration
auto eth1
iface eth1 inet manual
    bond-master bond0
    bond-primary eth1

# eth2 configuration
auto eth2
iface eth2 inet manual
    bond-master bond0

# Bonding eth1 & eth2 to create bond0 NIC
auto bond0
iface bond0 inet static
    address 192.168.1.200
    gateway 192.168.1.1
    netmask 255.255.255.0
    bond-mode active-backup
    bond-miimon 100
    bond-slaves none
Save and close the file.
Note: Here we are configuring active-backup mode; 192.168.1.200 is the bond0 IP address.
Next, restart the networking service for the changes to take effect:

sudo service networking restart
Bring up bond0:

sudo ifup bond0
Note: If you have any problems while bringing up bond0, reboot the system and check again.
Test Network Bonding
Now, enter the following command to check whether the bonding interface bond0 is up and running:

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth1 (primary_reselect always)
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:33:6e:fc
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:7c:b8:02
Slave queue ID: 0
As you can see in the above output, the bond0 interface is up and running, configured in active-backup (mode 1). In this mode, only one slave in the bond is active; another slave becomes active only if the active slave fails.
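If you only need the name of the currently active slave (for a monitoring script, say), it can be pulled out of that status text with a short awk filter. A sketch, matched to the field labels in the output above (the active_slave function name is our own):

```shell
# Print the currently active slave from bonding status text read on stdin.
# active_slave is a hypothetical helper name.
active_slave() {
    # Split each line on ": " and print the value of the
    # "Currently Active Slave" field.
    awk -F': ' '/^Currently Active Slave:/ { print $2 }'
}

# On a real system: active_slave < /proc/net/bonding/bond0
```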
To view the list of network interfaces and their IP addresses, enter the following command:

ifconfig
bond0     Link encap:Ethernet  HWaddr 08:00:27:33:6e:fc
          inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe33:6efc/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:1341 errors:0 dropped:181 overruns:0 frame:0
          TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:943994 (943.9 KB)  TX bytes:10399 (10.3 KB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:09:48:87
          inet addr:192.168.1.107  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe09:4887/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:957 errors:0 dropped:0 overruns:0 frame:0
          TX packets:829 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:897369 (897.3 KB)  TX bytes:184921 (184.9 KB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:33:6e:fc
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1143 errors:0 dropped:0 overruns:0 frame:0
          TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:916683 (916.6 KB)  TX bytes:10399 (10.3 KB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:33:6e:fc
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:198 errors:0 dropped:181 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:27311 (27.3 KB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:653 errors:0 dropped:0 overruns:0 frame:0
          TX packets:653 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:56460 (56.4 KB)  TX bytes:56460 (56.4 KB)
As per the above output, bond0 is configured as the master, and eth1 and eth2 are configured as slaves.
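As a quick scripted sanity check, the SLAVE flag can be grepped out of ifconfig-style output per interface. A sketch, assuming the classic ifconfig layout shown above, where interface names start in column 1 and flag lines are indented (list_slaves is a helper name of our own):

```shell
# List interfaces whose ifconfig flags line carries SLAVE.
# list_slaves is a hypothetical helper reading ifconfig output on stdin.
list_slaves() {
    # Remember the most recent interface name (line starting in column 1),
    # then print it whenever an indented flags line mentions SLAVE.
    awk '/^[^ ]/ { iface = $1 } /SLAVE/ { print iface }'
}

# On a real system: ifconfig | list_slaves
```

With the output above this should name eth1 and eth2 but not bond0, whose flags line says MASTER instead of SLAVE.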