|SUCCESS| [v2] net/intel: optimize for fast-free hint
dpdklab at iol.unh.edu
Thu Apr 9 03:05:55 CEST 2026
Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162944
_AI Code Review_
=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-09
Reviewing commit for DPDK coding standards and correctness:
git sha: 20260408132515.1314728-1-bruce.richardson at intel.com
Author: Bruce Richardson <bruce.richardson at intel.com>
Subject: [PATCH v2] net/intel: optimize for fast-free hint
This patch optimizes the Intel net driver to take advantage of the
RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE hint by resetting mbuf fields during
descriptor write and returning mbufs directly to the mempool without
accessing them during cleanup.
> diff --git a/drivers/net/intel/common/tx.h b/drivers/net/intel/common/tx.h
> index 283bd58d5d..f2123f069c 100644
> --- a/drivers/net/intel/common/tx.h
> +++ b/drivers/net/intel/common/tx.h
> @@ -363,13 +363,22 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
> return;
>
> if (!txq->use_vec_entry) {
> - /* Regular scalar path uses sw_ring with ci_tx_entry */
> - for (uint16_t i = 0; i < txq->nb_tx_desc; i++) {
> - if (txq->sw_ring[i].mbuf != NULL) {
> - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
> - txq->sw_ring[i].mbuf = NULL;
> - }
> + /* Free mbufs from (last_desc_cleaned + 1) to (tx_tail - 1). */
> + const uint16_t start = (txq->last_desc_cleaned + 1) % txq->nb_tx_desc;
> + const uint16_t nb_desc = txq->nb_tx_desc;
> + const uint16_t end = txq->tx_tail;
Does this code leak mbufs? The original code freed every entry in
sw_ring[], from index 0 through nb_tx_desc - 1. The new code frees only
the range last_desc_cleaned + 1 through tx_tail - 1. If any mbufs remain
in entries outside that range (perhaps from an aborted transmit or a
partially cleaned ring), does ci_txq_release_all_mbufs() leak them?
[ ... ]
> diff --git a/drivers/net/intel/common/tx_scalar.h b/drivers/net/intel/common/tx_scalar.h
> index 9fcd2e4733..adbc4bafee 100644
> --- a/drivers/net/intel/common/tx_scalar.h
> +++ b/drivers/net/intel/common/tx_scalar.h
> @@ -197,16 +197,63 @@ ci_tx_xmit_cleanup(struct ci_tx_queue *txq)
> const uint16_t rs_idx = (last_desc_cleaned == nb_tx_desc - 1) ?
> 0 :
> (last_desc_cleaned + 1) >> txq->log2_rs_thresh;
> - uint16_t desc_to_clean_to = (rs_idx << txq->log2_rs_thresh) + (txq->tx_rs_thresh - 1);
> + const uint16_t dd_idx = txq->rs_last_id[rs_idx];
> + const uint16_t first_to_clean = rs_idx << txq->log2_rs_thresh;
>
> - /* Check if descriptor is done */
> - if ((txd[txq->rs_last_id[rs_idx]].cmd_type_offset_bsz &
> - rte_cpu_to_le_64(CI_TXD_QW1_DTYPE_M)) !=
> - rte_cpu_to_le_64(CI_TX_DESC_DTYPE_DESC_DONE))
> + /* Check if descriptor is done - all drivers use 0xF as done value in bits 3:0 */
> + if ((txd[dd_idx].cmd_type_offset_bsz & rte_cpu_to_le_64(CI_TXD_QW1_DTYPE_M)) !=
> + rte_cpu_to_le_64(CI_TX_DESC_DTYPE_DESC_DONE))
> + /* Descriptor not yet processed by hardware */
> return -1;
>
> + /* DD bit is set, descriptors are done. Now free the mbufs. */
> + /* Note: nb_tx_desc is guaranteed to be a multiple of tx_rs_thresh,
> + * validated during queue setup. This means cleanup never wraps around
> + * the ring within a single burst (e.g., ring=256, rs_thresh=32 gives
> + * bursts of 0-31, 32-63, ..., 224-255).
> + */
> + const uint16_t nb_to_clean = txq->tx_rs_thresh;
> + struct ci_tx_entry *sw_ring = txq->sw_ring;
> +
> + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> + /* FAST_FREE path: mbufs are already reset, just return to pool */
> + void *free[CI_TX_MAX_FREE_BUF_SZ];
> + uint16_t nb_free = 0;
> +
> + /* Get cached mempool pointer, or cache it on first use */
> + struct rte_mempool *mp =
> + likely(txq->fast_free_mp != (void *)UINTPTR_MAX) ?
> + txq->fast_free_mp :
> + (txq->fast_free_mp = sw_ring[dd_idx].mbuf->pool);
Does this code dereference a NULL pointer? In ci_xmit_pkts() below, for
multi-segment packets with TSO, txe->mbuf can be set to NULL (line 549).
Can sw_ring[dd_idx].mbuf be NULL here, causing ->pool access to fault?
> +
> + /* Pack non-NULL mbufs in-place at start of sw_ring range.
> + * No modulo needed in loop since we're guaranteed not to wrap.
> + */
> + for (uint16_t i = 0; i < nb_to_clean; i++) {
> + struct rte_mbuf *m = sw_ring[first_to_clean + i].mbuf;
> + if (m == NULL)
> + continue;
This check suggests NULL mbufs are expected, contradicting the code
above that assumes sw_ring[dd_idx].mbuf is non-NULL for the mempool
cache initialization.
> + free[nb_free++] = m;
> + if (unlikely(nb_free == CI_TX_MAX_FREE_BUF_SZ)) {
> + rte_mempool_put_bulk(mp, free, nb_free);
> + nb_free = 0;
> + }
> + }
> +
> + /* Bulk return to mempool using packed sw_ring entries directly */
> + if (nb_free > 0)
> + rte_mempool_put_bulk(mp, free, nb_free);
> + } else {
> + /* Non-FAST_FREE path: use prefree_seg for refcount checks */
> + for (uint16_t i = 0; i < nb_to_clean; i++) {
> + struct rte_mbuf *m = sw_ring[first_to_clean + i].mbuf;
> + if (m != NULL)
> + rte_pktmbuf_free_seg(m);
> + }
> + }
> +
> /* Update the txq to reflect the last descriptor that was cleaned */
> - txq->last_desc_cleaned = desc_to_clean_to;
> + txq->last_desc_cleaned = first_to_clean + txq->tx_rs_thresh - 1;
> txq->nb_tx_free += txq->tx_rs_thresh;
>
> return 0;
[ ... ]
> @@ -507,10 +546,21 @@ ci_xmit_pkts(struct ci_tx_queue *txq,
> txd = &ci_tx_ring[tx_id];
> txn = &sw_ring[txe->next_id];
>
> - if (txe->mbuf)
> - rte_pktmbuf_free_seg(txe->mbuf);
> txe->mbuf = m_seg;
>
> + /* For FAST_FREE: reset mbuf fields while we have it in cache.
> + * FAST_FREE guarantees refcnt=1 and direct mbufs, so we only
> + * need to reset nb_segs and next pointer as per rte_pktmbuf_prefree_seg.
> + * Save next pointer before resetting since we need it for loop iteration.
> + */
> + struct rte_mbuf *next_seg = m_seg->next;
> + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> + if (m_seg->nb_segs != 1)
> + m_seg->nb_segs = 1;
Does this code violate the coding style guide? The guide forbids testing
booleans against TRUE/FALSE and requires zero/non-zero comparisons to be
explicit. Is "if (m_seg->nb_segs != 1)" an implicit comparison that
should instead be "if (m_seg->nb_segs > 1)"? The intent appears to be
detecting multi-segment mbufs rather than comparing against an exact
value, and the code should say so directly.
> + if (next_seg != NULL)
> + m_seg->next = NULL;
> + }
> +
> /* Setup TX Descriptor */
> /* Calculate segment length, using IPsec callback if provided */
> if (ipsec_ops != NULL)