[PATCH 2/3] net/dpaa2: clear active VDQ state when freeing Rx queues
Hemant Agrawal
hemant.agrawal at nxp.com
Thu Nov 6 17:38:06 CET 2025
From: Maxime Leroy <maxime at leroys.fr>
When using the prefetch Rx path (dpaa2_dev_prefetch_rx), the driver keeps
track of one outstanding VDQCR command per DPIO portal in the global
rte_global_active_dqs_list[] array. Each queue_storage_info_t also stores
the active result buffer and portal index:
    qs->active_dqs
    qs->active_dpio_id
Before issuing a new pull command, dpaa2_dev_prefetch_rx() checks for an
active entry and spins on qbman_check_command_complete() until the
corresponding VDQCR completes.
On port close / hotplug remove, dpaa2_free_rx_tx_queues() frees all
per-lcore queue_storage_info_t structures and their dq_storage[] buffers,
but never clears the global rte_global_active_dqs_list[] entries. After a
detach/attach sequence (or "del/add" in grout), the prefetch Rx path
still sees an active entry for the portal and spins forever on a stale dq
buffer that has been freed and will never be completed by hardware. In
gdb, dq->dq.tok stays 0 and dpaa2_dev_prefetch_rx() loops in:
    while (!qbman_check_command_complete(get_swp_active_dqs(idx)))
        ;
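For reference, a simplified sketch of that bookkeeping, paraphrased from the
prefetch path in drivers/net/dpaa2/dpaa2_rxtx.c (surrounding pull setup and
error handling trimmed, so treat it as illustrative rather than a verbatim
copy of the current source):

    /* Before issuing a new VDQCR on this portal, wait for whatever pull
     * the global per-portal list still records as outstanding.
     */
    if (check_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index)) {
        while (!qbman_check_command_complete(
               get_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index)))
            ;   /* spins forever if the entry points at freed dq storage */
        clear_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index);
    }

    /* After qbman_swp_pull() succeeds, record the new outstanding pull in
     * both the per-lcore queue storage and the global per-portal list.
     */
    q_storage->active_dqs = dq_storage;
    q_storage->active_dpio_id = DPAA2_PER_LCORE_ETHRX_DPIO->index;
    set_swp_active_dqs(q_storage->active_dpio_id, dq_storage);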
Fix this by clearing the active VDQ state before freeing queue storage.
For each Rx queue and lcore, if qs->active_dqs is non-NULL, call
clear_swp_active_dqs(qs->active_dpio_id) and set qs->active_dqs to NULL.
Then dpaa2_queue_storage_free() can safely free q_storage and
dq_storage[].
After this change, a DPNI detach/attach sequence no longer leaves stale
entries in rte_global_active_dqs_list[], and the prefetch Rx loop does
not hang waiting for a completion from a previous device instance.
Reproduction:
- grout:
grcli interface add port dpni.1 devargs fslmc:dpni.1
grcli interface del dpni.1
grcli interface add port dpni.1 devargs fslmc:dpni.1
-> Rx was stuck in qbman_check_command_complete(), now works.
- testpmd:
dpdk-testpmd -n1 -a fslmc:dpni.65535 -- -i --forward-mode=rxonly
testpmd> port attach fslmc:dpni.1
testpmd> port start all
testpmd> start
testpmd> stop
testpmd> port stop all
testpmd> port detach 0
testpmd> port attach fslmc:dpni.1
testpmd> port start all
testpmd> start
-> Rx was hanging, now runs normally.
Fixes: 12d98eceb8ac ("bus/fslmc: enhance QBMAN DQ storage logic")
Cc: jun.yang at nxp.com
Cc: stable at dpdk.org
Signed-off-by: Maxime Leroy <maxime at leroys.fr>
---
.mailmap | 1 +
drivers/net/dpaa2/dpaa2_ethdev.c | 19 +++++++++++++++++++
2 files changed, 20 insertions(+)
diff --git a/.mailmap b/.mailmap
index 10c37a97a6..1f540f7f51 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1036,6 +1036,7 @@ Mauro Annarumma <mauroannarumma at hotmail.it>
 Maxime Coquelin <maxime.coquelin at redhat.com>
 Maxime Gouin <maxime.gouin at 6wind.com>
 Maxime Leroy <maxime.leroy at 6wind.com>
+Maxime Leroy <maxime at leroys.fr>
 Md Fahad Iqbal Polash <md.fahad.iqbal.polash at intel.com>
 Megha Ajmera <megha.ajmera at intel.com>
 Meijuan Zhao <meijuanx.zhao at intel.com>
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index f3db7982a4..3c18d58804 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -631,6 +631,24 @@ dpaa2_alloc_rx_tx_queues(struct rte_eth_dev *dev)
 	return ret;
 }
 
+static void
+dpaa2_clear_queue_active_dps(struct dpaa2_queue *q, int num_lcores)
+{
+	int i;
+
+	for (i = 0; i < num_lcores; i++) {
+		struct queue_storage_info_t *qs = q->q_storage[i];
+
+		if (!qs)
+			continue;
+
+		if (qs->active_dqs) {
+			clear_swp_active_dqs(qs->active_dpio_id);
+			qs->active_dqs = NULL;
+		}
+	}
+}
+
 static void
 dpaa2_free_rx_tx_queues(struct rte_eth_dev *dev)
 {
@@ -645,6 +663,7 @@ dpaa2_free_rx_tx_queues(struct rte_eth_dev *dev)
 		/* cleaning up queue storage */
 		for (i = 0; i < priv->nb_rx_queues; i++) {
 			dpaa2_q = priv->rx_vq[i];
+			dpaa2_clear_queue_active_dps(dpaa2_q, RTE_MAX_LCORE);
 			dpaa2_queue_storage_free(dpaa2_q,
 				RTE_MAX_LCORE);
 		}
--
2.25.1