NIC Bonding Configuration on RHEL5

Network card bonding is an effective way to increase available bandwidth, provided it is done carefully. Before going any further, I will cover some basics of NIC bonding.

One main thing to keep in mind is that both the server and the switch the server is attached to must be configured to support bonding; otherwise you will find that the switch drops packets, or that roughly half of them are lost.

In networking, bonding of NIC cards is also called link aggregation.

Link aggregation is a computer networking term describing various methods of combining (aggregating) multiple network connections in parallel, both to increase throughput beyond what a single connection could sustain and to provide redundancy in case one of the links fails.

Most of the switches available today are compatible with this technology, so don't worry.

 

If Linux is configured for 802.3ad link aggregation, the switch must also be told about this. In the Cisco world, this is called an EtherChannel. Once the switch knows that the two ports are supposed to use 802.3ad, it will load balance the traffic destined for your attached server.

 

802.3ad mode is an IEEE standard, also called LACP (Link Aggregation Control Protocol). It includes automatic configuration of the aggregates, so minimal configuration of the switch is needed.
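
As a sketch of what LACP would look like (we use a different mode later in this article, so this is only for illustration, and it assumes a two-port bond on a Cisco switch): on the Linux side, the bonding options line we will add to /etc/modprobe.conf later would read

options bond0 mode=802.3ad miimon=100

and on the switch side the two member ports would be placed into an LACP channel group, roughly like this (exact commands vary by switch model and IOS version; the port names are assumptions):

interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active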

 

Linux allows binding of multiple network interfaces into a single channel/NIC using a special kernel module called bonding.

So the first thing to check is whether your machine supports bonding: the bonding module must be loaded into the kernel. This can be checked as follows.

[root@myvm1 ~]# lsmod | grep bonding

If the above command does not return anything, the kernel bonding module is not loaded, so you need to load it with the command below.

[root@myvm1 ~]# modprobe bonding
[root@myvm1 ~]# lsmod | grep bonding
bonding                80813  0
[root@myvm1 ~]#
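
If you are curious which parameters the module accepts (we will set two of them, mode and miimon, later on), modinfo lists them; look for the "parm" lines in its output:

[root@myvm1 ~]# modinfo bonding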

 

Now let's create a config file for a bond0 interface (don't worry, that's just an interface name). Red Hat Enterprise Linux (RHEL) stores the NIC config files in the following location:

/etc/sysconfig/network-scripts/

[root@myvm1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0

Now add the contents below; the meaning of each directive is explained right after the file.

 

DEVICE=bond0
IPADDR=192.168.1.20
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
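
A quick rundown of those directives: DEVICE names the interface being configured; IPADDR, NETWORK and NETMASK give the bond a static IPv4 address; BOOTPROTO=none disables DHCP so that static address is used; ONBOOT=yes brings the interface up at boot; and USERCTL=no prevents non-root users from bringing the interface up or down.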

 

You need to replace the IP address with one matching your actual setup, of course.

 

Now you just need to tell both NIC cards that they are slaves to that bond0 device.

 

Open the file /etc/sysconfig/network-scripts/ifcfg-eth0 and enter the following.

 

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

 

Now, in the file /etc/sysconfig/network-scripts/ifcfg-eth1, enter the following.

 

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

 

Notice that the only difference between the two files is the device name.

 

You need to make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up, so we will add the module name and its alias to the /etc/modprobe.conf file so it gets loaded at boot time.

 

alias bond0 bonding
options bond0 mode=balance-alb miimon=100
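
For reference, these are the mode values the bonding driver accepts, as documented in the kernel's bonding documentation; any one of them can be substituted on the options line above (names and numbers are interchangeable):

mode=balance-rr (0): transmit packets in round-robin order across all slaves
mode=active-backup (1): only one slave active at a time; pure failover
mode=balance-xor (2): slave selected by hashing the source and destination MAC addresses
mode=broadcast (3): transmit everything on all slaves
mode=802.3ad (4): LACP, as discussed earlier; requires switch support
mode=balance-tlb (5): adaptive transmit load balancing
mode=balance-alb (6): adaptive load balancing, the mode used in this article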

 

Now let's understand both of the options mentioned above.

 

mode=balance-alb means adaptive load balancing: it includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation: the bonding driver intercepts ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.

 

This is the main option for load balancing with NIC bonding.

 

miimon=100 specifies the MII link monitoring frequency in milliseconds. This determines how often the link state of each slave is inspected for link failures. A value of zero disables MII link monitoring; a value of 100 is a good starting point. (The use_carrier option affects how the link state is determined.) The default value is 0, which means that without this option, link failover is not enabled.
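
Incidentally, if you ever want to check a single slave's link state by hand (the same thing miimon polls for), ethtool can show it; eth0 here is just the example interface and the output line is illustrative:

[root@myvm1 ~]# ethtool eth0 | grep "Link detected"
        Link detected: yes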

 

 

Now we just need to restart the network, or in other words bring up the bond0 interface.

 

/etc/init.d/network restart
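
Once the restart completes, a quick sanity check is to look at the interfaces: bond0 should be UP and carry the MASTER flag, while eth0 and eth1 should carry the SLAVE flag.

[root@myvm1 ~]# ifconfig bond0
[root@myvm1 ~]# ifconfig eth0
[root@myvm1 ~]# ifconfig eth1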

 

That's it; you have enabled NIC bonding with failover and load balancing.

 

You can get information about the current bonding options and properties from the file below:

 

# cat /proc/net/bonding/bond0
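
The output looks roughly like the following (illustrative only; the driver version, MAC details and counters will differ on your system):

Ethernet Channel Bonding Driver: v3.4.0 (October 7, 2008)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth0
MII Status: up
Link Failure Count: 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0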

 

Hope you guys enjoyed this post.

 
