|SUCCESS| [v2] net/intel: optimize for fast-free hint

dpdklab at iol.unh.edu dpdklab at iol.unh.edu
Wed Apr 8 20:22:12 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162944

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-08

Reviewing patch: net/intel: optimize for fast-free hint

git sha: 20260408132515.1314728-1-bruce.richardson at intel.com
Author: Bruce Richardson <bruce.richardson at intel.com>
Subject: [PATCH v2] net/intel: optimize for fast-free hint

This patch optimizes the MBUF_FAST_FREE offload path in two halves:
mbuf fields are reset at descriptor-write time, while the mbuf is hot
in cache, so that cleanup can later return freed mbufs directly to the
mempool without touching them again.
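
The trade-off is easiest to see side by side. Roughly (a sketch, not
the patch's literal code; txe and txq->fast_free_mp follow the names
used in the diff below):

    /* Generic cleanup: each mbuf must be read to check its refcount
     * and indirection before it can go back to its own pool. */
    struct rte_mbuf *m = rte_pktmbuf_prefree_seg(txe->mbuf);
    if (m != NULL)
        rte_mempool_put(m->pool, m);

    /* FAST_FREE cleanup: the application guarantees refcnt == 1 and
     * a single mempool per queue, so the pointer can be returned to
     * the cached pool without touching the mbuf at all. */
    rte_mempool_put(txq->fast_free_mp, txe->mbuf);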

> diff --git a/drivers/net/intel/common/tx.h b/drivers/net/intel/common/tx.h
> index 283bd58d5d..f2123f069c 100644
> --- a/drivers/net/intel/common/tx.h
> +++ b/drivers/net/intel/common/tx.h

[ ... ]

> @@ -363,13 +363,22 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
>  		return;
>  
>  	if (!txq->use_vec_entry) {
> -		/* Regular scalar path uses sw_ring with ci_tx_entry */
> -		for (uint16_t i = 0; i < txq->nb_tx_desc; i++) {
> -			if (txq->sw_ring[i].mbuf != NULL) {
> -				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
> -				txq->sw_ring[i].mbuf = NULL;
> -			}
> +		/* Free mbufs from (last_desc_cleaned + 1) to (tx_tail - 1). */
> +		const uint16_t start = (txq->last_desc_cleaned + 1) % txq->nb_tx_desc;
> +		const uint16_t nb_desc = txq->nb_tx_desc;
> +		const uint16_t end = txq->tx_tail;
> +
> +		uint16_t i = start;
> +		if (end < i) {
> +			for (; i < nb_desc; i++)
> +				if (txq->sw_ring[i].mbuf != NULL)
> +					rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
> +			i = 0;
>  		}
> +		for (; i < end; i++)
> +			if (txq->sw_ring[i].mbuf != NULL)
> +				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
> +		memset(txq->sw_ring, 0, sizeof(txq->sw_ring[0]) * nb_desc);

Does this code correctly handle a partially used ring? The free loop
only covers (last_desc_cleaned + 1) through (tx_tail - 1), yet the
memset() wipes all nb_tx_desc entries. Can a non-NULL mbuf left in a
slot outside that window, for example from an earlier transmit
sequence, be zeroed without ever being freed, leaking it?
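
If the intended invariant is that every slot outside the [start, end)
window is already NULL, one cheap way to document and test it would be
an assertion before the memset() (sketch only; RTE_ASSERT compiles
away unless RTE_ENABLE_ASSERT is defined):

    /* Slots outside the cleaned window must not hold mbufs, or the
     * memset() below would leak them. */
    for (uint16_t j = end; j != start; j = (j + 1) % nb_desc)
        RTE_ASSERT(txq->sw_ring[j].mbuf == NULL);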

[ ... ]

> diff --git a/drivers/net/intel/common/tx_scalar.h b/drivers/net/intel/common/tx_scalar.h
> index 9fcd2e4733..adbc4bafee 100644
> --- a/drivers/net/intel/common/tx_scalar.h
> +++ b/drivers/net/intel/common/tx_scalar.h

[ ... ]

> @@ -197,16 +197,63 @@ ci_tx_xmit_cleanup(struct ci_tx_queue *txq)
>  	const uint16_t rs_idx = (last_desc_cleaned == nb_tx_desc - 1) ?
>  			0 :
>  			(last_desc_cleaned + 1) >> txq->log2_rs_thresh;
> -	uint16_t desc_to_clean_to = (rs_idx << txq->log2_rs_thresh) + (txq->tx_rs_thresh - 1);
> +	const uint16_t dd_idx = txq->rs_last_id[rs_idx];
> +	const uint16_t first_to_clean = rs_idx << txq->log2_rs_thresh;
>  
> -	/* Check if descriptor is done  */
> -	if ((txd[txq->rs_last_id[rs_idx]].cmd_type_offset_bsz &
> -			rte_cpu_to_le_64(CI_TXD_QW1_DTYPE_M)) !=
> -				rte_cpu_to_le_64(CI_TX_DESC_DTYPE_DESC_DONE))
> +	/* Check if descriptor is done - all drivers use 0xF as done value in bits 3:0 */
> +	if ((txd[dd_idx].cmd_type_offset_bsz & rte_cpu_to_le_64(CI_TXD_QW1_DTYPE_M)) !=
> +			rte_cpu_to_le_64(CI_TX_DESC_DTYPE_DESC_DONE))
> +		/* Descriptor not yet processed by hardware */
>  		return -1;
>  
> +	/* DD bit is set, descriptors are done. Now free the mbufs. */
> +	/* Note: nb_tx_desc is guaranteed to be a multiple of tx_rs_thresh,
> +	 * validated during queue setup. This means cleanup never wraps around
> +	 * the ring within a single burst (e.g., ring=256, rs_thresh=32 gives
> +	 * bursts of 0-31, 32-63, ..., 224-255).
> +	 */
> +	const uint16_t nb_to_clean = txq->tx_rs_thresh;
> +	struct ci_tx_entry *sw_ring = txq->sw_ring;
> +
> +	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> +		/* FAST_FREE path: mbufs are already reset, just return to pool */
> +		void *free[CI_TX_MAX_FREE_BUF_SZ];
> +		uint16_t nb_free = 0;
> +
> +		/* Get cached mempool pointer, or cache it on first use */
> +		struct rte_mempool *mp =
> +			likely(txq->fast_free_mp != (void *)UINTPTR_MAX) ?
> +			txq->fast_free_mp :
> +			(txq->fast_free_mp = sw_ring[dd_idx].mbuf->pool);

This caches the mempool pointer by dereferencing sw_ring[dd_idx].mbuf
with no NULL check. When FAST_FREE is enabled, can that slot ever hold
a NULL pointer, and would that crash here?
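
A guarded variant could seed the cache from the first non-NULL entry
in the burst instead of trusting dd_idx (sketch; if every slot is
empty, the packing loop below never calls rte_mempool_put_bulk(), so
the sentinel value is never used as a real pool pointer):

    struct rte_mempool *mp = txq->fast_free_mp;
    if (unlikely(mp == (void *)UINTPTR_MAX)) {
        /* Cache on first use, from any slot that holds an mbuf. */
        for (uint16_t i = 0; i < nb_to_clean; i++) {
            struct rte_mbuf *m = sw_ring[first_to_clean + i].mbuf;
            if (m != NULL) {
                mp = txq->fast_free_mp = m->pool;
                break;
            }
        }
    }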

> +
> +		/* Pack non-NULL mbufs in-place at start of sw_ring range.
> +		 * No modulo needed in loop since we're guaranteed not to wrap.
> +		 */
> +		for (uint16_t i = 0; i < nb_to_clean; i++) {
> +			struct rte_mbuf *m = sw_ring[first_to_clean + i].mbuf;
> +			if (m == NULL)
> +				continue;
> +			free[nb_free++] = m;
> +			if (unlikely(nb_free == CI_TX_MAX_FREE_BUF_SZ)) {
> +				rte_mempool_put_bulk(mp, free, nb_free);
> +				nb_free = 0;
> +			}
> +		}
> +
> +		/* Bulk return to mempool using packed sw_ring entries directly */
> +		if (nb_free > 0)
> +			rte_mempool_put_bulk(mp, free, nb_free);

Does this path leave sw_ring entries holding stale mbuf pointers
after they are freed to the mempool? Should the slots be cleared to
NULL after rte_mempool_put_bulk(), so that a later cleanup or queue
release cannot free the same mbufs a second time?
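
If clearing is warranted, it can be folded in after the bulk put
without another pass over the mbufs themselves (sketch):

    /* Drop the stale pointers so a later cleanup or queue release
     * cannot hand the same mbufs back to the pool twice. */
    for (uint16_t i = 0; i < nb_to_clean; i++)
        sw_ring[first_to_clean + i].mbuf = NULL;

(The counter-argument is that these stores dirty sw_ring cache lines
on every cleanup; the patch may instead rely on
ci_txq_release_all_mbufs() walking only the not-yet-cleaned window.)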

> +	} else {
> +		/* Non-FAST_FREE path: use prefree_seg for refcount checks */
> +		for (uint16_t i = 0; i < nb_to_clean; i++) {
> +			struct rte_mbuf *m = sw_ring[first_to_clean + i].mbuf;
> +			if (m != NULL)
> +				rte_pktmbuf_free_seg(m);
> +		}
> +	}

In the non-FAST_FREE path, does this code also need to clear the
sw_ring entries to NULL after freeing, or can the stale pointers be
passed to rte_pktmbuf_free_seg() again by a later cleanup or release
pass?
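
If it does, the store can share the existing NULL check (sketch):

    for (uint16_t i = 0; i < nb_to_clean; i++) {
        struct rte_mbuf *m = sw_ring[first_to_clean + i].mbuf;
        if (m != NULL) {
            rte_pktmbuf_free_seg(m);
            sw_ring[first_to_clean + i].mbuf = NULL;
        }
    }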

[ ... ]

> @@ -507,10 +546,21 @@ ci_xmit_pkts(struct ci_tx_queue *txq,
>  			txd = &ci_tx_ring[tx_id];
>  			txn = &sw_ring[txe->next_id];
>  
> -			if (txe->mbuf)
> -				rte_pktmbuf_free_seg(txe->mbuf);
>  			txe->mbuf = m_seg;
>  
> +			/* For FAST_FREE: reset mbuf fields while we have it in cache.
> +			 * FAST_FREE guarantees refcnt=1 and direct mbufs, so we only
> +			 * need to reset nb_segs and next pointer as per rte_pktmbuf_prefree_seg.
> +			 * Save next pointer before resetting since we need it for loop iteration.
> +			 */
> +			struct rte_mbuf *next_seg = m_seg->next;
> +			if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> +				if (m_seg->nb_segs != 1)
> +					m_seg->nb_segs = 1;
> +				if (next_seg != NULL)
> +					m_seg->next = NULL;
> +			}

Are the conditional checks (nb_segs != 1, next != NULL) worthwhile
here? FAST_FREE guarantees refcnt == 1 and direct mbufs, but not
single-segment packets: the head of a multi-segment chain carries
nb_segs > 1 and a non-NULL next, so both branches can be taken. Is
branching cheaper than unconditionally storing to fields that are hot
in cache but possibly still clean?
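
For comparison, rte_pktmbuf_prefree_seg() in rte_mbuf.h performs the
same conditional stores, and for the same reason: skipping the write
when the field already holds its reset value keeps the mbuf's cache
line clean in the common single-segment case (paraphrased):

    /* Paraphrased from rte_pktmbuf_prefree_seg(): only write when
     * the reset value is not already in place. */
    if (m->next != NULL)
        m->next = NULL;
    if (m->nb_segs != 1)
        m->nb_segs = 1;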

