[dpdk-dev] [PATCH] net/mlx5: fix minimum number of Multi-Packet RQ buffers

Shahaf Shuler shahafs at mellanox.com
Sun Aug 5 13:33:09 CEST 2018


Friday, August 3, 2018 12:00 AM, Yongseok Koh:
> Subject: [PATCH] net/mlx5: fix minimum number of Multi-Packet RQ buffers
> 
> If MPRQ is enabled, a PMD-private mempool is allocated. For ConnectX-4 Lx,
> the minimum number of strides is 512, whereas ConnectX-5 supports a minimum
> of 8. This results in a quite small number of elements for the MPRQ mempool.
> For example, if the Rx ring size is configured as 512, a single MPRQ buffer
> can cover the whole ring. If only one Rx queue is configured, then in the
> following code in mlx5_mprq_alloc_mp(), desc is 1 and obj_num will be
> 36 as a result.
> 
> 	desc *= 4;
> 	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * priv->rxqs_n;
> 
> However, rte_mempool_create_empty() has a sanity check that refuses a
> per-lcore cache size that is too large for the number of elements: the cache
> flush threshold must not exceed the number of elements in the mempool. For
> the above example, the threshold is 32 * 1.5 = 48, which is larger than 36,
> so creating the mempool fails.
> 
> Fixes: 7d6bf6b866b8 ("net/mlx5: add Multi-Packet Rx support")
> Cc: stable at dpdk.org
> 
> Signed-off-by: Yongseok Koh <yskoh at mellanox.com>

Applied to next-net-mlx, thanks.
