|SUCCESS| net/intel: introduce a dedicated idpf complq struct
dpdklab at iol.unh.edu
Mon Apr 6 13:56:13 CEST 2026
Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162856
_AI Code Review_
=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-06
Reviewing commit for DPDK coding standards compliance and correctness.
commit: [from patch series]
Author: Shaiq Wani <shaiq.wani at intel.com>
net/intel: introduce a dedicated idpf complq struct
The IDPF split-queue completion queue was using ci_tx_queue, a structure
designed for TX descriptor queues, wasting ~96 bytes per completion
queue. Additionally, the CQ-only fields (compl_ring, txqs, tx_start_qid,
expected_gen_id) bloated ci_tx_queue for every other Intel driver that
shares it.
Introduce struct idpf_complq with exactly the fields needed by the
completion queue. This brings the CQ allocation down from ~150 bytes
to ~48 bytes.
Brief Summary:
This patch extracts the completion-queue-specific fields from the generic
ci_tx_queue structure into a new, dedicated idpf_complq structure,
reducing the memory footprint and improving code organization.
> diff --git a/drivers/net/intel/common/tx.h b/drivers/net/intel/common/tx.h
> index 283bd58d5d..9da9366046 100644
> --- a/drivers/net/intel/common/tx.h
> +++ b/drivers/net/intel/common/tx.h
> @@ -114,6 +114,7 @@ struct ci_tx_desc {
>
> /* forward declaration of the common intel (ci) queue structure */
> struct ci_tx_queue;
> +struct idpf_complq;
[ ... ]
> @@ -209,18 +210,12 @@ struct ci_tx_queue {
> uint8_t vf_ctx_initialized; /**< VF context descriptors initialized */
> };
> struct { /* idpf specific values */
> - volatile union {
> - struct idpf_flex_tx_sched_desc *desc_ring;
> - struct idpf_splitq_tx_compl_desc *compl_ring;
> - };
> - struct ci_tx_queue *complq;
> - void **txqs; /*only valid for split queue mode*/
> - uint32_t tx_start_qid;
> + struct idpf_flex_tx_sched_desc *desc_ring;
> + struct idpf_complq *complq;
Does this code need comment updates here? The txqs, tx_start_qid, and
expected_gen_id fields are removed without a comment noting that they
have moved to struct idpf_complq, and the existing "only valid for split
queue mode" comment is dropped without replacement.
[ ... ]
> diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.h b/drivers/net/intel/idpf/idpf_common_rxtx.h
> index f512700d5f..b2d33287df 100644
> --- a/drivers/net/intel/idpf/idpf_common_rxtx.h
> +++ b/drivers/net/intel/idpf/idpf_common_rxtx.h
> @@ -181,6 +181,20 @@ struct idpf_rxq_ops {
> void (*release_mbufs)(struct idpf_rx_queue *rxq);
> };
>
> +/* Dedicated completion queue structure for IDPF split queue model. */
> +struct idpf_complq {
> + volatile struct idpf_splitq_tx_compl_desc *compl_ring;
> + void **txqs;
> + rte_iova_t tx_ring_dma;
> + const struct rte_memzone *mz;
> + uint32_t tx_start_qid;
> + uint16_t nb_tx_desc;
> + uint16_t tx_tail;
> + uint16_t queue_id;
> + uint16_t port_id;
> + uint8_t expected_gen_id;
> +};
Does this structure need alignment attributes? The ci_tx_queue structure
it replaces likely has RTE_CACHE_LINE_SIZE alignment. Should struct
idpf_complq also be marked __rte_cache_aligned to avoid false sharing?
[ ... ]
> diff --git a/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c b/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
> index db7728afad..3fb2efdb56 100644
> --- a/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
> +++ b/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
> @@ -781,9 +781,9 @@ idpf_dp_singleq_xmit_pkts_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
> }
>
> static __rte_always_inline void
> -idpf_splitq_scan_cq_ring(struct ci_tx_queue *cq)
> +idpf_splitq_scan_cq_ring(struct idpf_complq *cq)
> {
> - struct idpf_splitq_tx_compl_desc *compl_ring;
> + volatile struct idpf_splitq_tx_compl_desc *compl_ring;
Does this volatile qualifier change affect code generation? The
compl_ring is now read from cq->compl_ring which is already marked
volatile in struct idpf_complq. Does introducing an additional volatile
local pointer provide any benefit, or could this be non-volatile since
the source is already volatile?
The same question applies to the identical change in
idpf_common_rxtx_avx512.c at the corresponding location.
[ ... ]
> diff --git a/drivers/net/intel/idpf/idpf_rxtx.c b/drivers/net/intel/idpf/idpf_rxtx.c
> index b316c77b62..31005ce210 100644
> --- a/drivers/net/intel/idpf/idpf_rxtx.c
> +++ b/drivers/net/intel/idpf/idpf_rxtx.c
> @@ -360,7 +360,7 @@ idpf_tx_complq_setup(struct rte_eth_dev *dev, struct ci_tx_queue *txq,
> {
> struct idpf_vport *vport = dev->data->dev_private;
> const struct rte_memzone *mz;
> - struct ci_tx_queue *cq;
> + struct idpf_complq *cq;
> int ret;
>
> cq = rte_zmalloc_socket("idpf splitq cq",
Does this code now allocate the correct size? The rte_zmalloc_socket()
call appearing just below this context should now use sizeof(struct
idpf_complq) instead of sizeof(struct ci_tx_queue). Can you verify the
allocation size was updated to match the new structure?
The same concern applies to cpfl_rxtx.c where cpfl_tx_complq_setup() and
cpfl_tx_hairpin_queue_setup() allocate completion queues.