|SUCCESS| [v2,2/2] examples: use default mbuf burst size

dpdklab at iol.unh.edu dpdklab at iol.unh.edu
Fri Apr 17 02:53:32 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/163163

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-17

commit: 2026-04-16 (git sha not provided)
Author: Pavan Nikhilesh <pbhagavatula at marvell.com>
Subject: [PATCH v2 1/2] config: add mbuf burst size configuration

This patch introduces configurable mbuf burst size macros to allow
platform-specific optimization for throughput or latency.

> diff --git a/config/arm/meson.build b/config/arm/meson.build
> index 5a9c16b9b16c..3b1b5664f0b2 100644
> --- a/config/arm/meson.build
> +++ b/config/arm/meson.build
> @@ -481,6 +481,7 @@ soc_cn10k = {
>          ['RTE_MAX_LCORE', 24],
>          ['RTE_MAX_NUMA_NODES', 1],
>          ['RTE_MEMPOOL_ALIGN', 128],
> +        ['RTE_MBUF_BURST_SIZE_THROUGHPUT', 64],
>      ],
>      'part_number': '0xd49',
>      'extra_march_features': ['crypto'],

[ ... ]

> diff --git a/config/meson.build b/config/meson.build
> index 9ba7b9a3384f..eba2b9ae366e 100644
> --- a/config/meson.build
> +++ b/config/meson.build
> @@ -399,10 +399,25 @@ if get_option('mbuf_refcnt_atomic')
>  endif
>  dpdk_conf.set10('RTE_IOVA_IN_MBUF', get_option('enable_iova_as_pa'))
> 
> +# Recommended mbuf burst sizes for generic applications.
> +# Platform-specific configs may override these values.
> +# RTE_MBUF_BURST_SIZE_THROUGHPUT: Burst size optimized for throughput.
> +dpdk_conf.set('RTE_MBUF_BURST_SIZE_THROUGHPUT', 32)
> +# RTE_MBUF_BURST_SIZE_LATENCY: Burst size optimized for low latency.
> +dpdk_conf.set('RTE_MBUF_BURST_SIZE_LATENCY', 4)
> +
>  compile_time_cpuflags = []
>  subdir(arch_subdir)
>  dpdk_conf.set('RTE_COMPILE_TIME_CPUFLAGS', ','.join(compile_time_cpuflags))
> 
> +# RTE_MBUF_BURST_SIZE_DEFAULT: Default burst size used by examples and testpmd.
> +# Controlled by -Dmbuf_burst_size_default option (throughput or latency).
> +if get_option('mbuf_burst_size_default') == 'latency'
> +    dpdk_conf.set('RTE_MBUF_BURST_SIZE_DEFAULT', 'RTE_MBUF_BURST_SIZE_LATENCY')
> +else
> +    dpdk_conf.set('RTE_MBUF_BURST_SIZE_DEFAULT', 'RTE_MBUF_BURST_SIZE_THROUGHPUT')
> +endif

Does dpdk_conf.set() properly expand the macro reference here? The code
sets RTE_MBUF_BURST_SIZE_DEFAULT to the string
'RTE_MBUF_BURST_SIZE_LATENCY' or 'RTE_MBUF_BURST_SIZE_THROUGHPUT' rather
than to their numeric values. Depending on how the generated rte_config.h
emits string-valued entries (quoted string literal vs. bare token), can
this result in RTE_MBUF_BURST_SIZE_DEFAULT being defined as a C string
rather than resolving to 4 or 32?

[ ... ]

> diff --git a/meson_options.txt b/meson_options.txt
> index e28d24054cf1..2caf0be91d39 100644
> --- a/meson_options.txt
> +++ b/meson_options.txt
> @@ -46,6 +46,8 @@ option('enable_iova_as_pa', type: 'boolean', value: true, description:
>         'Support the use of physical addresses for IO addresses, such as used by UIO or VFIO in no-IOMMU mode. When disabled, DPDK can only run with IOMMU support for address mappings, but will have more space available in the mbuf structure.')
>  option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
>         'Atomically access the mbuf refcnt.')
> +option('mbuf_burst_size_default', type: 'combo', choices: ['throughput', 'latency'], value: 'throughput', description:
> +       'Default mbuf burst size profile: throughput-optimized or latency-optimized.')

[ ... ]


commit: 2026-04-16 (git sha not provided)
Author: Pavan Nikhilesh <pbhagavatula at marvell.com>
Subject: [PATCH v2 2/2] examples: use default mbuf burst size

This patch replaces hardcoded burst sizes with
RTE_MBUF_BURST_SIZE_DEFAULT throughout examples.

> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 9b60ebd7fc1a..fa6349391dd0 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -78,7 +78,11 @@ struct cmdline_file_info {
>  #define TX_DESC_MAX    2048
>  
>  #define MAX_PKT_BURST 512
> -#define DEF_PKT_BURST 32
> +#if RTE_MBUF_BURST_SIZE_DEFAULT > MAX_PKT_BURST
> +#define DEF_PKT_BURST MAX_PKT_BURST
> +#else
> +#define DEF_PKT_BURST RTE_MBUF_BURST_SIZE_DEFAULT
> +#endif

Given the possible issue in patch 1/2 where RTE_MBUF_BURST_SIZE_DEFAULT
might expand to a string like 'RTE_MBUF_BURST_SIZE_THROUGHPUT', does this
preprocessor comparison work correctly? Can this comparison evaluate
properly if RTE_MBUF_BURST_SIZE_DEFAULT is not directly defined as a
numeric constant?

[ ... ]

> diff --git a/examples/distributor/main.c b/examples/distributor/main.c
> index ea44939fba04..d60de85a369f 100644
> --- a/examples/distributor/main.c
> +++ b/examples/distributor/main.c
> @@ -23,10 +23,10 @@
>  #define TX_RING_SIZE 1024
>  #define NUM_MBUFS ((64*1024)-1)
>  #define MBUF_CACHE_SIZE 128
> -#define BURST_SIZE 64
> +#define BURST_SIZE	 RTE_MBUF_BURST_SIZE_DEFAULT
>  #define SCHED_RX_RING_SZ 8192
>  #define SCHED_TX_RING_SZ 65536
> -#define BURST_SIZE_TX 32
> +#define BURST_SIZE_TX	 RTE_MBUF_BURST_SIZE_DEFAULT

Does this code handle the case where BURST_SIZE was previously 64 but
RTE_MBUF_BURST_SIZE_DEFAULT defaults to 32? Can this reduce performance
or cause issues when RX_RING_SIZE and TX_RING_SIZE are sized for larger
bursts?

[ ... ]

> diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
> index b4ef4b6d04bc..191b60c8e2ee 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.h
> +++ b/examples/ipsec-secgw/ipsec-secgw.h
> @@ -11,8 +11,8 @@
>  
>  #define NB_SOCKETS 4
>  
> -#define MAX_PKT_BURST 32
> -#define MAX_PKT_BURST_VEC 256
> +#define MAX_PKT_BURST	  RTE_MBUF_BURST_SIZE_DEFAULT
> +#define MAX_PKT_BURST_VEC RTE_MBUF_BURST_SIZE_DEFAULT

Does this change preserve the intended behavior where MAX_PKT_BURST_VEC
was 8x larger than MAX_PKT_BURST? Setting both to
RTE_MBUF_BURST_SIZE_DEFAULT changes the ratio from 1:8 to 1:1. Does the
MAX_PKTS calculation which uses max(MAX_PKT_BURST_VEC, MAX_PKT_BURST)
still work correctly?

[ ... ]

> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
> index 471e3b488fe6..358640f1adb5 100644
> --- a/examples/l3fwd/l3fwd.h
> +++ b/examples/l3fwd/l3fwd.h
> @@ -23,14 +23,18 @@
>  #define RX_DESC_DEFAULT 1024
>  #define TX_DESC_DEFAULT 1024
>  
> -#define DEFAULT_PKT_BURST 32
>  #define MAX_PKT_BURST 512
> +#if RTE_MBUF_BURST_SIZE_DEFAULT > MAX_PKT_BURST
> +#define DEFAULT_PKT_BURST MAX_PKT_BURST
> +#else
> +#define DEFAULT_PKT_BURST RTE_MBUF_BURST_SIZE_DEFAULT
> +#endif
>  #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
>  
>  #define MEMPOOL_CACHE_SIZE RTE_MEMPOOL_CACHE_MAX_SIZE
>  #define MAX_RX_QUEUE_PER_LCORE 16
>  
> -#define VECTOR_SIZE_DEFAULT   MAX_PKT_BURST
> +#define VECTOR_SIZE_DEFAULT   RTE_MBUF_BURST_SIZE_DEFAULT

Does changing VECTOR_SIZE_DEFAULT from MAX_PKT_BURST (512) to
RTE_MBUF_BURST_SIZE_DEFAULT (likely 32 or 64) affect the eventdev vector
mode performance? Was VECTOR_SIZE_DEFAULT intentionally set to
MAX_PKT_BURST for a reason?

[ ... ]

> diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
> index da1b0b228787..911b7de266f5 100644
> --- a/examples/qos_meter/main.c
> +++ b/examples/qos_meter/main.c
> @@ -76,13 +76,13 @@ static struct rte_eth_conf port_conf = {
>   * Packet RX/TX
>   *
>   ***/
> -#define RTE_MBUF_F_RX_BURST_MAX                32
> -#define RTE_MBUF_F_TX_BURST_MAX                32
> +#define APP_RX_BURST_MAX		RTE_MBUF_BURST_SIZE_DEFAULT
> +#define APP_TX_BURST_MAX		RTE_MBUF_BURST_SIZE_DEFAULT

The old defines used the RTE_MBUF_F_ prefix, which might conflict with
the DPDK public API namespace (the RTE_MBUF_F_* mbuf offload-flag
macros). Does renaming these to an APP_ prefix properly avoid potential
conflicts?

[ ... ]

> diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h
> index ea66df0434fb..238f6c5ed272 100644
> --- a/examples/qos_sched/main.h
> +++ b/examples/qos_sched/main.h
> @@ -24,10 +24,10 @@ extern "C" {
>  #define APP_RING_SIZE (8*1024)
>  #define NB_MBUF   (2*1024*1024)
>  
> -#define MAX_PKT_RX_BURST 64
> -#define PKT_ENQUEUE 64
> -#define PKT_DEQUEUE 63
> -#define MAX_PKT_TX_BURST 64
> +#define MAX_PKT_RX_BURST RTE_MBUF_BURST_SIZE_DEFAULT
> +#define PKT_ENQUEUE      RTE_MBUF_BURST_SIZE_DEFAULT
> +#define PKT_DEQUEUE      (RTE_MBUF_BURST_SIZE_DEFAULT - 1)
> +#define MAX_PKT_TX_BURST RTE_MBUF_BURST_SIZE_DEFAULT

Does PKT_DEQUEUE need to remain exactly one less than PKT_ENQUEUE? If
RTE_MBUF_BURST_SIZE_DEFAULT changes, does the expression
(RTE_MBUF_BURST_SIZE_DEFAULT - 1) properly maintain the intended
relationship between enqueue and dequeue sizes?
