patch 'net/bonding: fix dedicated queue setup' has been queued to stable release 23.11.4
Xueming Li
xuemingl at nvidia.com
Tue Feb 18 13:34:05 CET 2025
Hi,
FYI, your patch has been queued to stable release 23.11.4
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
Please shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
done correctly.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=eb465c251ea6f64cd49c6ac53a8e71cfb328ebd4
Thanks.
Xueming Li <xuemingl at nvidia.com>
---
From eb465c251ea6f64cd49c6ac53a8e71cfb328ebd4 Mon Sep 17 00:00:00 2001
From: Long Wu <long.wu at corigine.com>
Date: Thu, 26 Dec 2024 09:26:18 +0800
Subject: [PATCH] net/bonding: fix dedicated queue setup
Cc: Xueming Li <xuemingl at nvidia.com>
[ upstream commit 4da0705bf896327af062212b5a1e6cb1f1366aa5 ]
The bonding PMD hardcoded the size of the dedicated hardware Rx/Tx
queues to 128/512 descriptors. This causes the bonding port to fail
to start if a NIC requires more Rx/Tx descriptors than the hardcoded
numbers.
Therefore, use the minimum hardware queue size of the member port to
initialize the dedicated hardware Rx/Tx queues. If the minimum queue
size cannot be obtained, fall back to the default queue size.
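For illustration only (not part of the patch), the Rx sizing logic reads
as follows when written as a standalone helper; the function name and the
fallback define are made up for this sketch, and the Tx side follows the
same pattern with tx_desc_lim.nb_min:

#include <rte_ethdev.h>

/* Illustrative fallback; the patch uses SLOW_RX_QUEUE_HW_DEFAULT_SIZE (512). */
#define EXAMPLE_RX_QUEUE_DEFAULT_SIZE 512

static uint16_t
example_pick_rx_queue_size(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	uint16_t nb_rx_desc = EXAMPLE_RX_QUEUE_DEFAULT_SIZE;

	/* Ask the PMD for its descriptor limits. The patch itself returns
	 * the error to the caller if this call fails.
	 */
	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return nb_rx_desc;

	/* A non-zero minimum is the smallest ring size the NIC supports;
	 * use it as the queue size, otherwise keep the default.
	 */
	if (dev_info.rx_desc_lim.nb_min != 0)
		nb_rx_desc = dev_info.rx_desc_lim.nb_min;

	return nb_rx_desc;
}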
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Signed-off-by: Long Wu <long.wu at corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he at corigine.com>
---
drivers/net/bonding/rte_eth_bond_8023ad.h | 3 +++
drivers/net/bonding/rte_eth_bond_pmd.c | 25 ++++++++++++++++++++---
2 files changed, 25 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 4c280c7565..54e233f858 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,6 +35,9 @@ extern "C" {
#define MARKER_TLV_TYPE_INFO 0x01
#define MARKER_TLV_TYPE_RESP 0x02
+#define SLOW_TX_QUEUE_HW_DEFAULT_SIZE 512
+#define SLOW_RX_QUEUE_HW_DEFAULT_SIZE 512
+
typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
struct rte_mbuf *lacp_pkt);
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 4144c86be4..c3a761d0d4 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1685,10 +1685,26 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,
}
if (internals->mode4.dedicated_queues.enabled == 1) {
- /* Configure slow Rx queue */
+ struct rte_eth_dev_info member_info = {};
+ uint16_t nb_rx_desc = SLOW_RX_QUEUE_HW_DEFAULT_SIZE;
+ uint16_t nb_tx_desc = SLOW_TX_QUEUE_HW_DEFAULT_SIZE;
+
+ errval = rte_eth_dev_info_get(member_eth_dev->data->port_id,
+ &member_info);
+ if (errval != 0) {
+ RTE_BOND_LOG(ERR,
+ "rte_eth_dev_info_get: port=%d, err (%d)",
+ member_eth_dev->data->port_id,
+ errval);
+ return errval;
+ }
+ if (member_info.rx_desc_lim.nb_min != 0)
+ nb_rx_desc = member_info.rx_desc_lim.nb_min;
+
+ /* Configure slow Rx queue */
errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
- internals->mode4.dedicated_queues.rx_qid, 128,
+ internals->mode4.dedicated_queues.rx_qid, nb_rx_desc,
rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL, port->slow_pool);
if (errval != 0) {
@@ -1700,8 +1716,11 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,
return errval;
}
+ if (member_info.tx_desc_lim.nb_min != 0)
+ nb_tx_desc = member_info.tx_desc_lim.nb_min;
+
errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
- internals->mode4.dedicated_queues.tx_qid, 512,
+ internals->mode4.dedicated_queues.tx_qid, nb_tx_desc,
rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL);
if (errval != 0) {
--
2.34.1
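As background, the dedicated slow-path queues configured above only exist
when the application has enabled them beforehand through the 802.3ad API.
A minimal sketch of that step (helper name made up; port creation and
error handling omitted):

#include <stdio.h>
#include <rte_eth_bond_8023ad.h>

/* Ask the bonding PMD to reserve a dedicated Rx/Tx queue pair per member
 * for LACP control traffic. Typically called while the bonding port is
 * stopped; the queues themselves are set up at port start, which is the
 * code path this patch fixes.
 */
static int
example_enable_lacp_dedicated_queues(uint16_t bond_port_id)
{
	int ret;

	ret = rte_eth_bond_8023ad_dedicated_queues_enable(bond_port_id);
	if (ret != 0)
		printf("cannot enable dedicated queues on port %u: %d\n",
		       bond_port_id, ret);

	return ret;
}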
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2025-02-18 19:39:01.106319465 +0800
+++ 0010-net-bonding-fix-dedicated-queue-setup.patch 2025-02-18 19:39:00.428244082 +0800
@@ -1 +1 @@
-From 4da0705bf896327af062212b5a1e6cb1f1366aa5 Mon Sep 17 00:00:00 2001
+From eb465c251ea6f64cd49c6ac53a8e71cfb328ebd4 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 4da0705bf896327af062212b5a1e6cb1f1366aa5 ]
@@ -16 +18,0 @@
-Cc: stable at dpdk.org
@@ -26 +28 @@
-index 395e41ff95..4c30bd40ee 100644
+index 4c280c7565..54e233f858 100644
@@ -40 +42 @@
-index f69496feec..467f7fe7ea 100644
+index 4144c86be4..c3a761d0d4 100644
@@ -43 +45 @@
-@@ -1684,10 +1684,26 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,
+@@ -1685,10 +1685,26 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,
@@ -72 +74 @@
-@@ -1699,8 +1715,11 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,
+@@ -1700,8 +1716,11 @@ member_configure_slow_queue(struct rte_eth_dev *bonding_eth_dev,