[PATCH v4] net/bonding: another fix to LACP mempool size

Min Hu (Connor) humin29 at huawei.com
Wed Mar 9 01:32:12 CET 2022


Acked-by: Min Hu (Connor) <humin29 at huawei.com>

在 2022/3/8 22:24, Gaoxiang Liu 写道:
> The following log message may appear after a slave is idle (or nearly
> idle) for a few minutes: "PMD: Failed to allocate LACP packet from
> pool". And bond mode 4 negotiation may fail.
> 
> Problem: When bond mode 4 has been chosen and the dedicated queue has
> not been enabled, all mbufs from a slave's private pool (used
> exclusively for transmitting LACPDUs) may have been allocated in the
> interrupt thread, and are still sitting in the device's tx
> descriptor ring and in other cores' mempool caches in the fwd thread.
> Thus the interrupt thread cannot allocate an LACP packet from the pool.
> 
> Solution: Ensure that each slave's tx (LACPDU) mempool owns more than
> n-tx-queues * n-tx-descriptors + fwd_core_num *
> per-core-mempool-flush-threshold mbufs.
> 
> Note that the LACP tx machine function is the only code that allocates
> from a slave's private pool. It runs in the context of the interrupt
> thread, and thus it has no mempool cache of its own.
> 
> Signed-off-by: Gaoxiang Liu <liugaoxiang at huawei.com>
> 
> ---
> v2:
> * Fixed compile issues.
> 
> v3:
> * Deleted duplicate code.
> 
> v4:
> * Fixed some issues:
> 1. total_tx_desc should use +=
> 2. Added detailed logs
> ---
>   drivers/net/bonding/rte_eth_bond_8023ad.c | 8 +++++---
>   lib/mempool/rte_mempool.h                 | 2 ++
>   2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index ca50583d62..d2168073cc 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -1050,6 +1050,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
>   	uint32_t total_tx_desc;
>   	struct bond_tx_queue *bd_tx_q;
>   	uint16_t q_id;
> +	uint32_t cache_size;
>   
>   	/* Given slave mus not be in active list */
>   	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
> @@ -1100,11 +1101,12 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
>   		total_tx_desc += bd_tx_q->nb_tx_desc;
>   	}
>   
> +	cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> +		32 : RTE_MEMPOOL_CACHE_MAX_SIZE;
> +	total_tx_desc += rte_lcore_count() * cache_size * RTE_MEMPOOL_CACHE_FLUSHTHRESH_MULTIPLIER;
>   	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
>   	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> -			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> -		0, element_size, socket_id);
> +		cache_size, 0, element_size, socket_id);
>   
>   	/* Any memory allocation failure in initialization is critical because
>   	 * resources can't be free, so reinitialization is impossible. */
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 1e7a3c1527..fa15ed710f 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -56,6 +56,8 @@
>   extern "C" {
>   #endif
>   
> +#define RTE_MEMPOOL_CACHE_FLUSHTHRESH_MULTIPLIER 1.5
> +
>   #define RTE_MEMPOOL_HEADER_COOKIE1  0xbadbadbadadd2e55ULL /**< Header cookie. */
>   #define RTE_MEMPOOL_HEADER_COOKIE2  0xf2eef2eedadd2e55ULL /**< Header cookie. */
>   #define RTE_MEMPOOL_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/
> 