[dpdk-dev] [PATCH 1/2] net/bonding: fix possible unbalanced packet receiving

Ferruh Yigit ferruh.yigit at intel.com
Fri Oct 9 15:44:20 CEST 2020


On 9/22/2020 11:29 AM, Li RongQing wrote:
> The current Rx round-robin policy for the slaves has two issues:
> 
> 1. active_slave in bond_dev_private is shared by multiple PMD
> threads, which can starve some slaves of Rx polling. For example,
> with two PMD threads and two slave ports, both threads start to
> receive, see that active_slave is 0, and receive from slave 0;
> after completing, each increments active_slave by one, so in
> total active_slave advances by two, and next time both threads
> start receiving from slave 0 again. As a result, slave 1 may
> drop packets because it is never polled.
> 
> 2. active_slave is shared and written by multiple PMD threads in
> the Rx path on every receive, which causes cache false sharing
> and degrades performance.
> 
> So move active_slave from bond_dev_private to bond_rx_queue,
> making it a per-queue variable.
> 
> Signed-off-by: Li RongQing <lirongqing at baidu.com>
> Signed-off-by: Dongsheng Rong <rongdongsheng at baidu.com>
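
For reference, a minimal sketch of the per-queue round-robin described
above. The struct layouts are simplified, hypothetical stand-ins for the
driver's bond_rx_queue/bond_dev_private (only rte_eth_rx_burst() is a
real DPDK call), so this illustrates the approach rather than the exact
patch:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Simplified stand-ins for the driver's structures: the point is that
 * the round-robin cursor now lives in the per-queue struct, so each
 * polling thread advances its own copy. */
struct bond_dev_private_sketch {
	uint16_t active_slave_count;
	uint16_t active_slaves[RTE_MAX_ETHPORTS];
	/* before the fix, a single shared 'active_slave' lived here */
};

struct bond_rx_queue_sketch {
	uint16_t queue_id;
	uint16_t active_slave;	/* per-queue cursor (the fix) */
	struct bond_dev_private_sketch *dev_private;
};

static uint16_t
bond_rx_burst_sketch(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	struct bond_rx_queue_sketch *bd_rx_q = queue;
	struct bond_dev_private_sketch *internals = bd_rx_q->dev_private;
	uint16_t slave_count = internals->active_slave_count;
	uint16_t idx = bd_rx_q->active_slave;
	uint16_t num_rx_total = 0;
	uint16_t i;

	if (idx >= slave_count)
		idx = 0;

	/* Poll every slave once per burst, starting from this queue's
	 * own cursor, so no slave can be skipped indefinitely. */
	for (i = 0; i < slave_count && nb_pkts > 0; i++) {
		uint16_t n = rte_eth_rx_burst(internals->active_slaves[idx],
				bd_rx_q->queue_id,
				bufs + num_rx_total, nb_pkts);
		num_rx_total += n;
		nb_pkts -= n;
		if (++idx == slave_count)
			idx = 0;
	}

	/* Advance this queue's cursor for the next burst; no other
	 * queue reads or writes it, so the shared-increment race and
	 * the false sharing both disappear. */
	if (++bd_rx_q->active_slave >= slave_count)
		bd_rx_q->active_slave = 0;

	return num_rx_total;
}

Because each Rx queue is polled by a single thread, the per-queue cursor
needs no synchronization and sits in memory that no other core writes.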

Fixes: ae2a04864a9a ("net/bonding: reduce slave starvation on Rx poll")
Cc: stable at dpdk.org

For series,
Reviewed-by: Wei Hu (Xavier) <xavier.huwei at huawei.com>

Series applied to dpdk-next-net/main, thanks.
