patch 'net/mlx5: store MTU at Rx queue allocation time' has been queued to stable release 22.11.11

luca.boccassi at gmail.com
Wed Nov 12 17:52:52 CET 2025


Hi,

FYI, your patch has been queued to stable release 22.11.11

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/14/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/a4a4d8f3386a0e32ecbd808689561c70747fcb71

Thanks.

Luca Boccassi

---
From a4a4d8f3386a0e32ecbd808689561c70747fcb71 Mon Sep 17 00:00:00 2001
From: Adrian Schollmeyer <a.schollmeyer at syseleven.de>
Date: Thu, 30 Oct 2025 10:13:13 +0100
Subject: [PATCH] net/mlx5: store MTU at Rx queue allocation time

[ upstream commit 4414eb800708475bf1b38794434e590c7204d9d3 ]

For shared Rx queues, mlx5_shared_rxq_match() enforces that all ports
sharing a queue have equal MTUs, to make sure the memory allocated for
the Rx buffer is large enough. The check uses the MTU reported by the
ports' private dev_data structs, which hold the MTU currently set for
each device. If one port's MTU is changed after the Rx queues are
allocated and a second port then joins the shared Rx queue with the
old, still correct, MTU, the check fails even though the Rx buffer
size is correct for both ports.

This patch adds a new entry to the Rx queue control structure that
captures the MTU at the time the Rx buffer was allocated, since this is
the relevant information that needs to be checked when a port joins a
shared Rx queue.

Fixes: 09c2555303be ("net/mlx5: support shared Rx queue")

Signed-off-by: Adrian Schollmeyer <a.schollmeyer at syseleven.de>
Acked-by: Dariusz Sosnowski <dsosnowski at nvidia.com>
---
 .mailmap                    | 1 +
 drivers/net/mlx5/mlx5_rx.h  | 1 +
 drivers/net/mlx5/mlx5_rxq.c | 6 +++++-
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/.mailmap b/.mailmap
index ddc091adac..f59db4fb3a 100644
--- a/.mailmap
+++ b/.mailmap
@@ -18,6 +18,7 @@ Adam Ludkiewicz <adam.ludkiewicz at intel.com>
 Adham Masarwah <adham at nvidia.com> <adham at mellanox.com>
 Adrian Moreno <amorenoz at redhat.com>
 Adrian Podlawski <adrian.podlawski at intel.com>
+Adrian Schollmeyer <a.schollmeyer at syseleven.de>
 Adrien Mazarguil <adrien.mazarguil at 6wind.com>
 Ady Agbarih <adypodoman at gmail.com>
 Agalya Babu RadhaKrishnan <agalyax.babu.radhakrishnan at intel.com>
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 588d83a073..1467e1dd49 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -154,6 +154,7 @@ struct mlx5_rxq_data {
 /* RX queue control descriptor. */
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
+	uint16_t mtu; /* Original MTU that the queue was allocated with. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b1834aac7c..040273486f 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -776,7 +776,7 @@ mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev,
 			dev->data->port_id, idx);
 		return false;
 	}
-	if (priv->mtu != spriv->mtu) {
+	if (priv->mtu != rxq_ctrl->mtu) {
 		DRV_LOG(ERR, "port %u queue index %u failed to join shared group: mtu mismatch",
 			dev->data->port_id, idx);
 		return false;
@@ -1765,6 +1765,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	LIST_INIT(&tmpl->owners);
 	MLX5_ASSERT(n_seg && n_seg <= MLX5_MAX_RXQ_NSEG);
+	/*
+	 * Save the original MTU to check against for shared rx queues.
+	 */
+	tmpl->mtu = dev->data->mtu;
 	/*
 	 * Save the original segment configuration in the shared queue
 	 * descriptor for the later check on the sibling queue creation.
-- 
2.47.3

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2025-11-12 16:20:42.402144512 +0000
+++ 0038-net-mlx5-store-MTU-at-Rx-queue-allocation-time.patch	2025-11-12 16:20:40.971718030 +0000
@@ -1 +1 @@
-From 4414eb800708475bf1b38794434e590c7204d9d3 Mon Sep 17 00:00:00 2001
+From a4a4d8f3386a0e32ecbd808689561c70747fcb71 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 4414eb800708475bf1b38794434e590c7204d9d3 ]
+
@@ -21 +22,0 @@
-Cc: stable at dpdk.org
@@ -32 +33 @@
-index 1fb3fb5128..50a59a596a 100644
+index ddc091adac..f59db4fb3a 100644
@@ -35 +36,2 @@
-@@ -21,6 +21,7 @@ Adham Masarwah <adham at nvidia.com> <adham at mellanox.com>
+@@ -18,6 +18,7 @@ Adam Ludkiewicz <adam.ludkiewicz at intel.com>
+ Adham Masarwah <adham at nvidia.com> <adham at mellanox.com>
@@ -37 +38,0 @@
- Adrian Pielech <adrian.pielech at intel.com>
@@ -44 +45 @@
-index 7be31066a5..127abe41fb 100644
+index 588d83a073..1467e1dd49 100644
@@ -47 +48 @@
-@@ -176,6 +176,7 @@ struct __rte_cache_aligned mlx5_rxq_data {
+@@ -154,6 +154,7 @@ struct mlx5_rxq_data {
@@ -56 +57 @@
-index 1425886a22..2264dea877 100644
+index b1834aac7c..040273486f 100644
@@ -59 +60 @@
-@@ -780,7 +780,7 @@ mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev,
+@@ -776,7 +776,7 @@ mlx5_shared_rxq_match(struct mlx5_rxq_ctrl *rxq_ctrl, struct rte_eth_dev *dev,
@@ -68 +69 @@
-@@ -1812,6 +1812,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+@@ -1765,6 +1765,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,

