patch 'net/mlx5: fix polling CQEs' has been queued to stable release 22.11.8

luca.boccassi at gmail.com
Mon Feb 17 18:03:45 CET 2025


Hi,

FYI, your patch has been queued to stable release 22.11.8

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/19/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/afbe03428db62e75c98afe1aa492b5d252fe8e56

Thanks.

Luca Boccassi

---
From afbe03428db62e75c98afe1aa492b5d252fe8e56 Mon Sep 17 00:00:00 2001
From: Gavin Hu <gahu at nvidia.com>
Date: Fri, 6 Dec 2024 02:58:11 +0200
Subject: [PATCH] net/mlx5: fix polling CQEs

[ upstream commit 73f7ae1d721aa5c388123db11827937205985999 ]

In certain situations, the receive queue (rxq) fails to replenish its
internal ring with memory buffers (mbufs) from the pool. This can happen
when the pool has a limited number of mbufs allocated, and the user
application holds incoming packets for an extended period, resulting in a
delayed release of mbufs. Consequently, the pool becomes depleted,
preventing the rxq from replenishing from it.
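
For illustration only (not part of the fix), the standalone sketch below
shows how such depletion can be reproduced: a deliberately small pktmbuf
pool runs dry when the application keeps every mbuf it gets instead of
freeing it back. The pool name, size and hold pattern are hypothetical;
build against DPDK and run under EAL (e.g. with --no-huge for a quick
test).

  #include <stdio.h>

  #include <rte_eal.h>
  #include <rte_lcore.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  #define POOL_SIZE 63	/* deliberately tiny pool (hypothetical sizing) */

  int main(int argc, char **argv)
  {
  	struct rte_mbuf *held[POOL_SIZE];
  	struct rte_mempool *mp;
  	unsigned int i, n = 0;

  	if (rte_eal_init(argc, argv) < 0)
  		return -1;
  	mp = rte_pktmbuf_pool_create("tiny_pool", POOL_SIZE, 0, 0,
  				     RTE_MBUF_DEFAULT_BUF_SIZE,
  				     rte_socket_id());
  	if (mp == NULL)
  		return -1;
  	/* The "application" holds every packet and never frees it back. */
  	for (i = 0; i < POOL_SIZE; i++) {
  		held[i] = rte_pktmbuf_alloc(mp);
  		if (held[i] == NULL)
  			break;	/* pool already depleted */
  		n++;
  	}
  	/* The next allocation (what an Rx replenish would attempt) fails. */
  	if (rte_pktmbuf_alloc(mp) == NULL)
  		printf("pool depleted: replenishment would fail here\n");
  	printf("held %u mbufs, %u left in the pool\n",
  	       n, rte_mempool_avail_count(mp));
  	return 0;
  }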

There was a bug in the behavior of the vectorized rxq_cq_process_v routine,
which handled completion queue entries (CQEs) in batches of four. This
routine consistently accessed four mbufs from the internal queue ring,
regardless of whether they had been replenished. As a result, it could
access mbufs that no longer belonged to the poll mode driver (PMD).
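
As a rough illustration (this is not the real rxq_cq_process_v; all
names and sizes below are made up), a loop that always consumes a full
batch of four ring entries walks past the last replenished slot whenever
fewer than four mbufs were refilled:

  #include <stdio.h>

  #define DESCS_PER_LOOP 4	/* batch size of the vectorized Rx path */

  int main(void)
  {
  	/* Pretend 6 of 8 ring slots still hold PMD-owned mbufs; the last
  	 * two were handed to the application and never refilled because
  	 * the pool was empty. */
  	unsigned int replenished = 6;
  	unsigned int pos;

  	/* Buggy pattern: consume a full batch starting at slot 4 without
  	 * checking how many entries were actually replenished. */
  	for (pos = 4; pos < 4 + DESCS_PER_LOOP; pos++) {
  		if (pos >= replenished)
  			printf("slot %u: mbuf no longer owned by the PMD\n",
  			       pos);
  		else
  			printf("slot %u: safe to process\n", pos);
  	}
  	return 0;
  }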

The fix involves checking if there are four replenished mbufs available
before allowing rxq_cq_process_v to handle the batch. Once replenishment
succeeds during the polling process, the routine will resume its operation.
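
A minimal standalone sketch of the added clamp, using hypothetical
counter values and simplified types (RTE_MIN and RTE_ALIGN_FLOOR come
from rte_common.h; MLX5_VPMD_DESCS_PER_LOOP is redefined locally just
for the example): rq_ci - rq_pi is the number of replenished
descriptors, and rounding it down to a multiple of four ensures the
vector routine only ever sees complete, fully owned batches.

  #include <stdint.h>
  #include <stdio.h>

  #include <rte_common.h>	/* RTE_MIN, RTE_ALIGN_FLOOR */

  #define MLX5_VPMD_DESCS_PER_LOOP 4

  int main(void)
  {
  	uint16_t rq_ci = 135;	/* descriptors replenished so far (made up) */
  	uint16_t rq_pi = 128;	/* descriptors already handed to the app */
  	uint16_t pkts_n = 32;	/* burst size requested by the caller */

  	/* 7 replenished mbufs are available; only 4 form a full batch. */
  	pkts_n = RTE_MIN(pkts_n,
  			 RTE_ALIGN_FLOOR((uint16_t)(rq_ci - rq_pi),
  					 MLX5_VPMD_DESCS_PER_LOOP));
  	printf("packets handled in this poll: %u\n", (unsigned int)pkts_n);
  	/* prints 4 */
  	return 0;
  }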

Fixes: 1ded26239aa0 ("net/mlx5: refactor vectorized Rx")

Reported-by: Changqi Dingluo <dingluochangqi.ck at bytedance.com>
Signed-off-by: Gavin Hu <gahu at nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
---
 .mailmap                         | 4 +++-
 drivers/net/mlx5/mlx5_rxtx_vec.c | 3 +++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/.mailmap b/.mailmap
index df8e5c48ab..51c2bc9df3 100644
--- a/.mailmap
+++ b/.mailmap
@@ -200,6 +200,7 @@ Chaitanya Babu Talluri <tallurix.chaitanya.babu at intel.com>
 Chandubabu Namburu <chandu at amd.com>
 Changchun Ouyang <changchun.ouyang at intel.com>
 Changpeng Liu <changpeng.liu at intel.com>
+Changqi Dingluo <dingluochangqi.ck at bytedance.com>
 Changqing Wu <changqingx.wu at intel.com>
 Chaoyong He <chaoyong.he at corigine.com>
 Chao Zhu <chaozhu at linux.vnet.ibm.com> <bjzhuc at cn.ibm.com>
@@ -424,7 +425,8 @@ Gargi Sau <gargi.sau at intel.com>
 Gary Mussar <gmussar at ciena.com>
 Gaurav Singh <gaurav1086 at gmail.com>
 Gautam Dawar <gdawar at solarflare.com>
-Gavin Hu <gavin.hu at arm.com> <gavin.hu at linaro.org>
+Gavin Hu <gahu at nvidia.com> <gavin.hu at arm.com> <gavin.hu at linaro.org>
+Gavin Li <gavinl at nvidia.com>
 Geoffrey Le Gourriérec <geoffrey.le_gourrierec at 6wind.com>
 Geoffrey Lv <geoffrey.lv at gmail.com>
 Geoff Thorpe <geoff.thorpe at nxp.com>
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 667475a93e..f37e9e104e 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -324,6 +324,9 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	/* Not to cross queue end. */
 	pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
 	pkts_n = RTE_MIN(pkts_n, q_n - cq_idx);
+	/* Not to move past the allocated mbufs. */
+	pkts_n = RTE_MIN(pkts_n, RTE_ALIGN_FLOOR(rxq->rq_ci - rxq->rq_pi,
+						MLX5_VPMD_DESCS_PER_LOOP));
 	if (!pkts_n) {
 		*no_cq = !rcvd_pkt;
 		return rcvd_pkt;
-- 
2.47.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2025-02-17 16:13:17.321175369 +0000
+++ 0012-net-mlx5-fix-polling-CQEs.patch	2025-02-17 16:13:16.794441593 +0000
@@ -1 +1 @@
-From 73f7ae1d721aa5c388123db11827937205985999 Mon Sep 17 00:00:00 2001
+From afbe03428db62e75c98afe1aa492b5d252fe8e56 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 73f7ae1d721aa5c388123db11827937205985999 ]
+
@@ -24 +25,0 @@
-Cc: stable at dpdk.org
@@ -30 +31 @@
- .mailmap                         | 3 ++-
+ .mailmap                         | 4 +++-
@@ -32 +33 @@
- 2 files changed, 5 insertions(+), 1 deletion(-)
+ 2 files changed, 6 insertions(+), 1 deletion(-)
@@ -35 +36 @@
-index 38e511a28b..1ed47e1cad 100644
+index df8e5c48ab..51c2bc9df3 100644
@@ -38,2 +39,2 @@
-@@ -225,6 +225,7 @@ Chandubabu Namburu <chandu at amd.com>
- Chang Miao <chang.miao at corigine.com>
+@@ -200,6 +200,7 @@ Chaitanya Babu Talluri <tallurix.chaitanya.babu at intel.com>
+ Chandubabu Namburu <chandu at amd.com>
@@ -46 +47 @@
-@@ -465,7 +466,7 @@ Gargi Sau <gargi.sau at intel.com>
+@@ -424,7 +425,8 @@ Gargi Sau <gargi.sau at intel.com>
@@ -52 +53 @@
- Gavin Li <gavinl at nvidia.com>
++Gavin Li <gavinl at nvidia.com>
@@ -54,0 +56 @@
+ Geoff Thorpe <geoff.thorpe at nxp.com>
@@ -56 +58 @@
-index 1872bf310c..1b701801c5 100644
+index 667475a93e..f37e9e104e 100644
@@ -59 +61 @@
-@@ -325,6 +325,9 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
+@@ -324,6 +324,9 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,

