patch 'net/txgbe: reduce memory size of ring descriptors' has been queued to stable release 24.11.4
Kevin Traynor
ktraynor at redhat.com
Fri Nov 21 12:20:08 CET 2025
Hi,
FYI, your patch has been queued to stable release 24.11.4.
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/26/25, so please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate whether any rebasing was
needed to apply the patch to the stable branch. If there were code changes for
rebasing (i.e. not only metadata diffs), please double-check that the rebase
was done correctly.
Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable
This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/d70a651fa8aa4fffb488a89cb068b5234225ef14
Thanks.
Kevin
---
From d70a651fa8aa4fffb488a89cb068b5234225ef14 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu at trustnetic.com>
Date: Mon, 27 Oct 2025 11:15:26 +0800
Subject: [PATCH] net/txgbe: reduce memory size of ring descriptors
[ upstream commit 843c59d1c2cef10a75037ebc73460f2ed28f9839 ]
The memory for ring descriptors was allocated at the maximum ring
size, which is not friendly to our hardware on some domestic platforms.
Change it to allocate only the real ring size.
Fixes: 226bf98eda87 ("net/txgbe: add Rx and Tx queues setup and release")
Signed-off-by: Jiawen Wu <jiawenwu at trustnetic.com>
---
drivers/net/txgbe/txgbe_rxtx.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 2f4690ec61..e2ac091dfe 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2468,11 +2468,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
- /*
- * Allocate TX ring hardware descriptors. A memzone large enough to
- * handle the maximum ring size is allocated in order to allow for
- * resizing in later calls to the queue setup function.
- */
+ /* Allocate TX ring hardware descriptors. */
tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
- sizeof(struct txgbe_tx_desc) * TXGBE_RING_DESC_MAX,
+ sizeof(struct txgbe_tx_desc) * nb_desc,
TXGBE_ALIGN, socket_id);
if (tz == NULL) {
@@ -2725,4 +2721,5 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
uint64_t offloads;
+ uint32_t size;
PMD_INIT_FUNC_TRACE();
@@ -2775,11 +2772,8 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->pkt_type_mask = TXGBE_PTID_MASK;
- /*
- * Allocate RX ring hardware descriptors. A memzone large enough to
- * handle the maximum ring size is allocated in order to allow for
- * resizing in later calls to the queue setup function.
- */
+ /* Allocate RX ring hardware descriptors. */
+ size = (nb_desc + RTE_PMD_TXGBE_RX_MAX_BURST) * sizeof(struct txgbe_rx_desc);
rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
- RX_RING_SZ, TXGBE_ALIGN, socket_id);
+ size, TXGBE_ALIGN, socket_id);
if (rz == NULL) {
txgbe_rx_queue_release(rxq);
@@ -2791,5 +2785,5 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
* Zero init all the descriptors in the ring.
*/
- memset(rz->addr, 0, RX_RING_SZ);
+ memset(rz->addr, 0, size);
/*
--
2.51.0
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2025-11-21 11:05:10.367975227 +0000
+++ 0024-net-txgbe-reduce-memory-size-of-ring-descriptors.patch 2025-11-21 11:05:09.406200926 +0000
@@ -1 +1 @@
-From 843c59d1c2cef10a75037ebc73460f2ed28f9839 Mon Sep 17 00:00:00 2001
+From d70a651fa8aa4fffb488a89cb068b5234225ef14 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 843c59d1c2cef10a75037ebc73460f2ed28f9839 ]
+
@@ -11 +12,0 @@
-Cc: stable at dpdk.org
@@ -19 +20 @@
-index c606180741..d77db1efa2 100644
+index 2f4690ec61..e2ac091dfe 100644
@@ -22 +23 @@
-@@ -2522,11 +2522,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
+@@ -2468,11 +2468,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
@@ -36 +37 @@
-@@ -2782,4 +2778,5 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+@@ -2725,4 +2721,5 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
@@ -42 +43 @@
-@@ -2832,11 +2829,8 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+@@ -2775,11 +2772,8 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
@@ -57 +58 @@
-@@ -2848,5 +2842,5 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+@@ -2791,5 +2785,5 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,