[PATCH] dumpcap: fix mbuf pool ring type
David Marchand
david.marchand at redhat.com
Mon Oct 2 09:33:50 CEST 2023
On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
<stephen at networkplumber.org> wrote:
>
> The ring used to store mbufs needs to be multiple producer,
> multiple consumer because multiple queues on multiple cores
> might be allocating at the same time (consume) and, in case
> of ring full, the mbufs will be returned (multiple producer).
I think I get the idea, but can you rephrase please?
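For context, the concurrent access pattern being described looks
roughly like this (an illustrative sketch, not the actual dumpcap
code; capture_lcore is a made-up name):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Each capturing lcore allocates mbufs from the shared pool (a
 * dequeue, i.e. the ring consumer side) and frees them back when
 * done (an enqueue, i.e. the ring producer side). With several
 * lcores doing this in parallel, the pool's underlying ring must
 * be multi-producer/multi-consumer. */
static int
capture_lcore(void *arg)
{
	struct rte_mempool *mp = arg;
	struct rte_mbuf *m;

	/* allocation dequeues from the mempool ring (consume) */
	m = rte_pktmbuf_alloc(mp);
	if (m == NULL)
		return -1;

	/* ... copy the captured packet into m ... */

	/* freeing enqueues back into the mempool ring (produce) */
	rte_pktmbuf_free(m);
	return 0;
}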
>
> Bugzilla ID: 1271
> Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
This Fixes: tag looks wrong.
> Signed-off-by: Stephen Hemminger <stephen at networkplumber.org>
> ---
> app/dumpcap/main.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> index 64294bbfb3e6..991174e95022 100644
> --- a/app/dumpcap/main.c
> +++ b/app/dumpcap/main.c
> @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
> data_size = mbuf_size;
> }
>
> - mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> - MBUF_POOL_CACHE_SIZE, 0,
> - data_size,
> - rte_socket_id(), "ring_mp_sc");
> + mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> + MBUF_POOL_CACHE_SIZE, 0,
> + data_size, rte_socket_id());
Switching to rte_pktmbuf_pool_create() still leaves the user with a
way to shoot themselves in the foot (I was thinking of someone
setting the --mbuf-pool-ops-name EAL option).
This application has explicit requirements in terms of concurrent
access (and I don't think the mempool library exposes per-driver
capabilities in that regard).
The application was enforcing the use of mempool/ring so far.
I think it is safer to go with an explicit
rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
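i.e. something like this (untested, just to make the suggestion
concrete; it is the same call as before the patch, only forcing the
multi-producer/multi-consumer ring ops):

	mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
			MBUF_POOL_CACHE_SIZE, 0,
			data_size,
			rte_socket_id(), "ring_mp_mc");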
WDYT?
> if (mp == NULL)
> rte_exit(EXIT_FAILURE,
> "Mempool (%s) creation failed: %s\n", pool_name,
> --
> 2.39.2
>
Thanks.
--
David Marchand