Bond fails to start all slaves
Aman Thakur
r.aman.t.435 at gmail.com
Fri Apr 4 16:08:23 CEST 2025
Hi users,
Environment
===========
DPDK Version : 21.11.9
Bond members
slave 1 : Intel e1000e I217 1 Gbps "Ethernet Connection I217-LM" (device ID 153a)
slave 2 : Intel e1000e 82574 1 Gbps "82574L Gigabit Network Connection" (device ID 10d3)
OS : Rocky Linux 8.10 (RHEL/CentOS/Fedora family)
Compiler: gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-24)
Steps to reproduce
==================
1. bind ports to dpdk
dpdk-devbind.py -b vfio-pci 0000:b1:00.0 0000:b1:00.1
2. launch testpmd
./dpdk-testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2
--no-lsc-interrupt --port-topology=chained
3. create the bonded device (a minimal API-level sketch of the same sequence
follows these steps)
port stop all
create bonded device 0 0
add bonding slave 0 2
add bonding slave 1 2
port start all
show port info all
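
For reference, the same sequence can also be driven through the bonding API
instead of the testpmd CLI. Below is a rough, untested sketch against DPDK
21.11; the helper name setup_bond, the device name net_bonding0, the queue
sizes, and the assumption that ports 0 and 1 are already probed by EAL and
that an mbuf pool exists are all mine, not part of the original setup:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_mbuf.h>

/* Hypothetical helper mirroring "create bonded device 0 0" plus the two
 * "add bonding slave" commands; mode 0 is round-robin, NUMA socket 0. */
static int setup_bond(struct rte_mempool *mbuf_pool)
{
	struct rte_eth_conf conf;
	int bond_port;

	memset(&conf, 0, sizeof(conf));

	bond_port = rte_eth_bond_create("net_bonding0",
					BONDING_MODE_ROUND_ROBIN, 0);
	if (bond_port < 0)
		return bond_port;

	/* Equivalent of "add bonding slave 0 2" / "add bonding slave 1 2". */
	if (rte_eth_bond_slave_add(bond_port, 0) != 0 ||
	    rte_eth_bond_slave_add(bond_port, 1) != 0)
		return -1;

	/* One RX/TX queue pair with driver defaults. */
	if (rte_eth_dev_configure(bond_port, 1, 1, &conf) != 0)
		return -1;
	if (rte_eth_rx_queue_setup(bond_port, 0, 1024,
			rte_eth_dev_socket_id(bond_port), NULL,
			mbuf_pool) != 0)
		return -1;
	if (rte_eth_tx_queue_setup(bond_port, 0, 1024,
			rte_eth_dev_socket_id(bond_port), NULL) != 0)
		return -1;

	/* Starting the bonded port also starts its member ports. */
	return rte_eth_dev_start(bond_port);
}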
Results:
========
The link status of one of the ports is down (this is not specific to the
bonding mode). In every bond mode, the link of slave 1, the Intel e1000e
I217 1 Gbps "Ethernet Connection I217-LM" (device ID 153a), is down.
Port 0 Link down
Port 1 Link up at 1 Gbps FDX Autoneg
Port 1: link state change event
Port 2 Link up at 1 Gbps FDX Autoneg
Done
Expected Result:
================
The status of all ports should be normal: the link of both members/slaves
should be up, and port 0 should not remain down.
In bond mode 0 (round-robin), the bonded port's link speed should be 2 Gbps,
since it aggregates two 1 Gbps members.
My Questions:
=============
1. What could be causing one of the slaves to become inactive? (A small
diagnostic sketch follows these questions.)
2. Is there a specific configuration or step I might be missing that is
preventing the bond from utilizing both slaves?
3. Are there any known compatibility issues or limitations with the Intel
e1000e I217 1 Gbps "Ethernet Connection I217-LM" (device ID 153a) that
could explain this behavior?
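
For question 1, a first step could be to ask the bonding PMD which members
it currently considers active and compare that with each member's own link
state and the bonded port's aggregate link. A rough diagnostic sketch,
assuming the port numbering from the testpmd session above (members 0 and 1,
bonded port 2); dump_bond_state and the hard-coded member list are
placeholders of mine:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Hypothetical diagnostic: print the bonding PMD's active-slave count,
 * each member's link status/speed, and the bonded port's aggregate link. */
static void dump_bond_state(uint16_t bond_port)
{
	uint16_t slaves[RTE_MAX_ETHPORTS];
	uint16_t members[] = { 0, 1 };	/* assumed member port ids */
	struct rte_eth_link link;
	int n, i;

	n = rte_eth_bond_active_slaves_get(bond_port, slaves,
					   RTE_MAX_ETHPORTS);
	printf("active slaves: %d\n", n);

	for (i = 0; i < 2; i++) {
		if (rte_eth_link_get_nowait(members[i], &link) == 0)
			printf("port %u: link %s, %u Mbps\n", members[i],
			       link.link_status ? "up" : "down",
			       link.link_speed);
	}

	/* The bonded port's own link reflects the aggregated members. */
	if (rte_eth_link_get_nowait(bond_port, &link) == 0)
		printf("bond port %u: link %s, %u Mbps\n", bond_port,
		       link.link_status ? "up" : "down", link.link_speed);
}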