Active/Backup bond not working
RafS
32 Posts
May 10, 2018, 9:34 pm
Hi,
I created a setup with bond type 1 (active-backup) for my backend network. The problem is that when the main adapter fails, the second one does not take over. Looking at kern.log, it seems the link failure is never detected.
Regards
Raf Schandevyl
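(One way to confirm whether the kernel sees the carrier drop at all is to watch the link state directly while the cable is pulled; a quick sketch, assuming eth6/eth7 are the backend NICs and a Debian-style /var/log/kern.log:)
ethtool eth6 | grep "Link detected"      # should flip from "yes" to "no" when the cable is unplugged
grep -i "link is" /var/log/kern.log      # the ixgbe driver used by the X552 normally logs "NIC Link is Up/Down" lines
ip monitor link                          # prints link-state change events live while you unplug/replug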
admin
2,930 Posts
May 13, 2018, 3:55 pm
We just tested it and it is working here.
Can you check that the mode is set to active-backup:
cat /sys/class/net/BOND_NAME/bonding/mode
Are you using NICs of different types? Sometimes this causes issues in bonds.
Last edited on May 13, 2018, 3:56 pm by admin · #2
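(A few more bonding parameters worth reading from sysfs while checking the mode; BOND_NAME is a placeholder for the actual bond device, e.g. bBACKEND:)
cat /sys/class/net/BOND_NAME/bonding/miimon         # MII polling interval in ms; 0 would mean link monitoring is disabled
cat /sys/class/net/BOND_NAME/bonding/active_slave   # the NIC currently carrying traffic
cat /sys/class/net/BOND_NAME/bonding/fail_over_mac  # how the MAC address is handled on failover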
RafS
32 Posts
May 14, 2018, 5:46 am
Hi, I'm using 2 identical 10G NICs (Intel X552).
Output:
cat: /sys/class/net/BOND_NAME/bonding/mode: No such file or directory
root@ps-node-01:~# cat /sys/class/net/bBACKEND/bonding/mode
active-backup 1
Regards
Raf
admin
2,930 Posts
May 14, 2018, 11:07 am
Can you check that the backend network was mapped to the bond and not to the NICs? Also, if you set up the bond manually, does it work?
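(For the manual test, a minimal sketch using iproute2; this assumes the existing bBACKEND configuration is taken down first, or that a spare bond name and a test address are used; the names and the address below are placeholders only:)
ip link add bond_test type bond mode active-backup miimon 100   # active-backup bond with 100 ms link monitoring
ip link set eth6 down                                           # slaves must be down before they can be enslaved
ip link set eth7 down
ip link set eth6 master bond_test
ip link set eth7 master bond_test
ip link set bond_test up
ip addr add 192.168.200.150/24 dev bond_test                    # placeholder test address
cat /proc/net/bonding/bond_test                                 # then pull the active slave's cable and re-check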
RafS
32 Posts
May 14, 2018, 2:09 pm
How can I check whether the backend network is mapped to the bond and not to the NIC? In the configuration interface I selected the bond.
I have not tested creating the bond manually yet.
Regards
Raf
admin
2,930 Posts
May 15, 2018, 3:42 pm
You can check that your IPs are mapped to the bonds via:
ip addr
Can you also run the following to see the NICs assigned to the bonds:
cat /sys/class/net/BOND_NAME/bonding/slaves
cat /proc/net/bonding/BOND_NAME
The latter should show you which NIC is active in an active-backup bond.
Last edited on May 15, 2018, 4:03 pm by admin · #6
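(To watch the failover happen, or fail to happen, in real time while the cable is pulled; bBACKEND is the bond name from this thread:)
watch -n 1 "grep 'Currently Active Slave' /proc/net/bonding/bBACKEND"
# in a second terminal, keep traffic flowing across the backend network;
# 192.168.160.151 is only a placeholder for another node's backend address
ping 192.168.160.151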
RafS
32 Posts
May 15, 2018, 4:42 pm
Reading the output, the IP addresses are on the bond. Everything seems OK, but when I disconnect the cable from the active slave, the other one does not take over (the node goes down).
Regards
Raf
root@ps-node-01:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bMANAGEMENT state UP group default qlen 1000
link/ether ac:1f:6b:67:01:18 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bMANAGEMENT state UP group default qlen 1000
link/ether ac:1f:6b:67:01:18 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bISCSI-01 state UP group default qlen 1000
link/ether ac:1f:6b:67:01:1a brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bISCSI-01 state UP group default qlen 1000
link/ether ac:1f:6b:67:01:1b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bISCSI-02 state UP group default qlen 1000
link/ether ac:1f:6b:67:01:1c brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bISCSI-02 state UP group default qlen 1000
link/ether ac:1f:6b:67:01:1d brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bBACKEND state UP group default qlen 1000
link/ether ac:1f:6b:67:06:5e brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bBACKEND state UP group default qlen 1000
link/ether ac:1f:6b:67:06:5e brd ff:ff:ff:ff:ff:ff
11: bMANAGEMENT: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:67:01:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.150/24 scope global bMANAGEMENT
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fe67:118/64 scope link
valid_lft forever preferred_lft forever
12: bISCSI-01: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:67:01:1a brd ff:ff:ff:ff:ff:ff
inet6 fe80::ae1f:6bff:fe67:11a/64 scope link
valid_lft forever preferred_lft forever
13: bISCSI-02: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:67:01:1c brd ff:ff:ff:ff:ff:ff
inet6 fe80::ae1f:6bff:fe67:11c/64 scope link
valid_lft forever preferred_lft forever
14: bBACKEND: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether ac:1f:6b:67:06:5e brd ff:ff:ff:ff:ff:ff
inet 192.168.160.150/24 scope global bBACKEND
valid_lft forever preferred_lft forever
inet 192.168.161.150/24 scope global bBACKEND
valid_lft forever preferred_lft forever
inet6 fe80::ae1f:6bff:fe67:65e/64 scope link
valid_lft forever preferred_lft forever
root@ps-node-01:~#
root@ps-node-01:~# cat /sys/class/net/bBACKEND/bonding/slaves
eth6 eth7
root@ps-node-01:~# cat /proc/net/bonding/bBACKEND
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth6 (primary_reselect always)
Currently Active Slave: eth6
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth6
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: ac:1f:6b:67:06:5e
Slave queue ID: 0
Slave Interface: eth7
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: ac:1f:6b:67:06:5f
Slave queue ID: 0
root@ps-node-01:~#
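(Since both slaves report MII Status: up here, one more isolation test is to force the failover by hand, with all cables still plugged in, and see whether traffic keeps flowing over eth7; a sketch using the bonding sysfs interface:)
echo eth7 > /sys/class/net/bBACKEND/bonding/active_slave   # make eth7 the active slave without touching any cables
grep "Currently Active Slave" /proc/net/bonding/bBACKEND
# if pings across the backend stop now, the problem is on the eth7 side
# (cabling, switch port, VLAN) rather than in the bond driver itself
echo eth6 > /sys/class/net/bBACKEND/bonding/active_slave   # switch back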
admin
2,930 Posts
May 15, 2018, 10:01 pm
Turn off fencing from the maintenance page so the node does not get killed; this will make it easier to debug. Then, if you unplug the eth6 cable, use the previous commands to see whether eth7 becomes active or not. I would recommend you also try setting up the bonding manually and see if it behaves differently. I would also check your switches to see if there are any settings that need to be configured.
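(With fencing off, one way to judge the failover from the outside is to test reachability from a second node on the backend network while the eth6 cable on ps-node-01 is unplugged; the address is taken from the ip addr output above, adjust for your peer node:)
ping 192.168.160.150                  # ps-node-01's backend address, run from another node
ip neigh show 192.168.160.150         # with the default fail_over_mac setting, the learned MAC should stay the bond MAC (ac:1f:6b:67:06:5e)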