|SUCCESS| [v2,2/2] examples: use default mbuf burst size

dpdklab at iol.unh.edu dpdklab at iol.unh.edu
Fri Apr 17 00:11:23 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/163163

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-16

I reviewed the patch series that introduces configurable mbuf burst sizes to DPDK. Below is my analysis:

Message-Id: 20260416211159.44785-1-pbhagavatula at marvell.com (patch 1/2)
Message-Id: 20260416211159.44785-2-pbhagavatula at marvell.com (patch 2/2)
Author: Pavan Nikhilesh <pbhagavatula at marvell.com>
Subject: [PATCH v2 1/2] config: add mbuf burst size configuration
Subject: [PATCH v2 2/2] examples: use default mbuf burst size

This patch series adds configurable mbuf burst size macros to optimize
for either throughput (default 32) or latency (default 4), and updates
all example applications to use these defaults instead of hardcoded
values.

> diff --git a/config/meson.build b/config/meson.build
> index 9ba7b9a3384f..eba2b9ae366e 100644
> --- a/config/meson.build
> +++ b/config/meson.build
> @@ -399,10 +399,25 @@ if get_option('mbuf_refcnt_atomic')
>  endif
>  dpdk_conf.set10('RTE_IOVA_IN_MBUF', get_option('enable_iova_as_pa'))
> 
> +# Recommended mbuf burst sizes for generic applications.
> +# Platform-specific configs may override these values.
> +# RTE_MBUF_BURST_SIZE_THROUGHPUT: Burst size optimized for throughput.
> +dpdk_conf.set('RTE_MBUF_BURST_SIZE_THROUGHPUT', 32)
> +# RTE_MBUF_BURST_SIZE_LATENCY: Burst size optimized for low latency.
> +dpdk_conf.set('RTE_MBUF_BURST_SIZE_LATENCY', 4)
> +
>  compile_time_cpuflags = []
>  subdir(arch_subdir)
>  dpdk_conf.set('RTE_COMPILE_TIME_CPUFLAGS', ','.join(compile_time_cpuflags))
> 
> +# RTE_MBUF_BURST_SIZE_DEFAULT: Default burst size used by examples and testpmd.
> +# Controlled by -Dmbuf_burst_size_default option (throughput or latency).
> +if get_option('mbuf_burst_size_default') == 'latency'
> +    dpdk_conf.set('RTE_MBUF_BURST_SIZE_DEFAULT', 'RTE_MBUF_BURST_SIZE_LATENCY')
> +else
> +    dpdk_conf.set('RTE_MBUF_BURST_SIZE_DEFAULT', 'RTE_MBUF_BURST_SIZE_THROUGHPUT')
> +endif

This sets RTE_MBUF_BURST_SIZE_DEFAULT to the string
'RTE_MBUF_BURST_SIZE_LATENCY', which is then used in C preprocessor
contexts. Meson's configuration_data.set() emits string values
verbatim, so the generated header should read "#define
RTE_MBUF_BURST_SIZE_DEFAULT RTE_MBUF_BURST_SIZE_LATENCY" without
quotes; set_quoted(), by contrast, would wrap the value in quotes and
break compilation. Worth confirming that DPDK's build-config header
generation goes through plain set() semantics here.

[ ... ]

> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 9b60ebd7fc1a..fa6349391dd0 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -78,7 +78,11 @@ struct cmdline_file_info {
>  #define TX_DESC_MAX    2048
>  
>  #define MAX_PKT_BURST 512
> -#define DEF_PKT_BURST 32
> +#if RTE_MBUF_BURST_SIZE_DEFAULT > MAX_PKT_BURST
> +#define DEF_PKT_BURST MAX_PKT_BURST
> +#else
> +#define DEF_PKT_BURST RTE_MBUF_BURST_SIZE_DEFAULT
> +#endif

Does this preprocessor comparison work correctly if
RTE_MBUF_BURST_SIZE_DEFAULT expands to another macro name like
RTE_MBUF_BURST_SIZE_THROUGHPUT? It should: #if fully macro-expands its
operands before evaluating the controlling expression, so a chain of
macro names is fine as long as it terminates in an integer constant.

> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
> index 471e3b488fe6..358640f1adb5 100644
> --- a/examples/l3fwd/l3fwd.h
> +++ b/examples/l3fwd/l3fwd.h
> @@ -23,14 +23,18 @@
>  #define RX_DESC_DEFAULT 1024
>  #define TX_DESC_DEFAULT 1024
>  
> -#define DEFAULT_PKT_BURST 32
>  #define MAX_PKT_BURST 512
> +#if RTE_MBUF_BURST_SIZE_DEFAULT > MAX_PKT_BURST
> +#define DEFAULT_PKT_BURST MAX_PKT_BURST
> +#else
> +#define DEFAULT_PKT_BURST RTE_MBUF_BURST_SIZE_DEFAULT
> +#endif
>  #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
>  
>  #define MEMPOOL_CACHE_SIZE RTE_MEMPOOL_CACHE_MAX_SIZE
>  #define MAX_RX_QUEUE_PER_LCORE 16
>  
> -#define VECTOR_SIZE_DEFAULT   MAX_PKT_BURST
> +#define VECTOR_SIZE_DEFAULT   RTE_MBUF_BURST_SIZE_DEFAULT

Does changing VECTOR_SIZE_DEFAULT from MAX_PKT_BURST (512) to
RTE_MBUF_BURST_SIZE_DEFAULT (32 or 64 depending on platform) break
vectorized packet processing that might expect larger batches? What is
the impact on applications using vector mode?

> diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
> index aa33e71d7aa5..ab773f25369e 100644
> --- a/examples/link_status_interrupt/main.c
> +++ b/examples/link_status_interrupt/main.c
> @@ -38,7 +38,7 @@
>  
>  #define NB_MBUF   8192
>  
> -#define MAX_PKT_BURST 32
> +#define MAX_PKT_BURST	  RTE_MBUF_BURST_SIZE_DEFAULT
>  #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
>  
>  /*
> @@ -60,8 +60,6 @@ static unsigned int lsi_rx_queue_per_lcore = 1;
>  /* destination port for L2 forwarding */
>  static unsigned lsi_dst_ports[RTE_MAX_ETHPORTS] = {0};
>  
> -#define MAX_PKT_BURST 32
> -
>  #define MAX_RX_QUEUE_PER_LCORE 16
>  #define MAX_TX_QUEUE_PER_PORT 16

This correctly removes a duplicate definition of MAX_PKT_BURST. However,
does this file declare any static arrays sized with MAX_PKT_BURST?
Since RTE_MBUF_BURST_SIZE_DEFAULT comes from the generated build
config it should expand to an integer constant expression, but if it
ever did not, such declarations would fail to compile.

> diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
> index da1b0b228787..911b7de266f5 100644
> --- a/examples/qos_meter/main.c
> +++ b/examples/qos_meter/main.c
> @@ -76,13 +76,13 @@ static struct rte_eth_conf port_conf = {
>   * Packet RX/TX
>   *
>   ***/
> -#define RTE_MBUF_F_RX_BURST_MAX                32
> -#define RTE_MBUF_F_TX_BURST_MAX                32
> +#define APP_RX_BURST_MAX		RTE_MBUF_BURST_SIZE_DEFAULT
> +#define APP_TX_BURST_MAX		RTE_MBUF_BURST_SIZE_DEFAULT

The original macro names RTE_MBUF_F_RX_BURST_MAX and
RTE_MBUF_F_TX_BURST_MAX carry the RTE_MBUF_F_ prefix used by the
public mbuf offload-flag namespace, so the rename to APP_* both marks
them as file-local and removes a misleading namespace collision. Still
worth grepping the tree to confirm nothing outside this file
references the old names.

> @@ -187,7 +187,7 @@ main_loop(__rte_unused void *dummy)
>  		}
>  
>  		/* Read packet burst from NIC RX */
> -		nb_rx = rte_eth_rx_burst(port_rx, NIC_RX_QUEUE, pkts_rx, RTE_MBUF_F_RX_BURST_MAX);
> +		nb_rx = rte_eth_rx_burst(port_rx, NIC_RX_QUEUE, pkts_rx, APP_RX_BURST_MAX);
>  
>  		/* Handle packets */
>  		for (i = 0; i < nb_rx; i ++) {

Does this code properly handle the case where APP_RX_BURST_MAX is now
potentially platform-dependent (64 on CN10K vs 32 elsewhere)? Is the
pkts_rx array declaration sized appropriately to handle the larger
burst size?

