patch 'net/gve: clean when insufficient Tx descriptors' has been queued to stable release 24.11.4

Kevin Traynor
ktraynor at redhat.com

Fri Oct 31 15:32:12 CET 2025

Hi,

FYI, your patch has been queued to stable release 24.11.4.

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/05/25, so please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
done correctly.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/d04632254ef040dd4c79edf95b10a6e0e5902a8d

Thanks.

Kevin
---
From d04632254ef040dd4c79edf95b10a6e0e5902a8d Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash at google.com>
Date: Mon, 7 Jul 2025 16:18:06 -0700
Subject: [PATCH] net/gve: clean when insufficient Tx descriptors

[ upstream commit 92d330a3eabb1ca2f74d494ebea0104bc7fd081f ]

A single packet can technically require more than 32 (free_thresh)
descriptors to send. Count the number of descriptors needed to send out
a packet in DQO Tx, and ensure that there are enough descriptors in the
ring before writing. If there are not enough free descriptors, drop the
packet and increment drop counters.

Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")

Signed-off-by: Joshua Washington <joshwash at google.com>
Reviewed-by: Ankit Garg <nktgrg at google.com>
---
 drivers/net/gve/gve_tx_dqo.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 6227fa73b0..652a0e5175 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -75,4 +75,10 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 }
 
+static inline void
+gve_tx_clean_descs_dqo(struct gve_tx_queue *txq, uint16_t nb_descs) {
+	while (--nb_descs)
+		gve_tx_clean_dqo(txq);
+}
+
 static uint16_t
 gve_tx_pkt_nb_data_descs(struct rte_mbuf *tx_pkt)
@@ -108,5 +114,4 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct rte_mbuf *tx_pkt;
 	uint16_t mask, sw_mask;
-	uint16_t nb_to_clean;
 	uint16_t nb_tx = 0;
 	uint64_t ol_flags;
@@ -131,9 +136,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		tx_pkt = tx_pkts[nb_tx];
 
-		if (txq->nb_free <= txq->free_thresh) {
-			nb_to_clean = DQO_TX_MULTIPLIER * txq->rs_thresh;
-			while (nb_to_clean--)
-				gve_tx_clean_dqo(txq);
-		}
+		if (txq->nb_free <= txq->free_thresh)
+			gve_tx_clean_descs_dqo(txq, DQO_TX_MULTIPLIER *
+					       txq->rs_thresh);
 
 		ol_flags = tx_pkt->ol_flags;
@@ -145,6 +148,14 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nb_descs = gve_tx_pkt_nb_data_descs(tx_pkt);
 		nb_descs += tso;
-		if (txq->nb_free < nb_descs)
-			break;
+
+		/* Clean if there aren't enough descriptors to send the packet. */
+		if (unlikely(txq->nb_free < nb_descs)) {
+			int nb_to_clean = RTE_MAX(DQO_TX_MULTIPLIER * txq->rs_thresh,
+						  nb_descs);
+
+			gve_tx_clean_descs_dqo(txq, nb_to_clean);
+			if (txq->nb_free < nb_descs)
+				break;
+		}
 
 		if (tso) {
-- 
2.51.0
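For reviewers of the backport, the reservation-and-clean logic introduced by the patch can be modeled in isolation. The following is a hypothetical, simplified C sketch, not the driver's actual code: `struct txq_model`, `clean_descs`, `reserve_descs`, and the `DQO_TX_MULTIPLIER` value of 4 are all illustrative stand-ins. It shows the key behavior the patch adds: when the free count is short, clean at least `RTE_MAX(DQO_TX_MULTIPLIER * rs_thresh, nb_descs)` completed descriptors, then drop the packet only if the ring is still short.

```c
#include <stdint.h>

/* Illustrative stand-in for the driver's Tx queue state. */
struct txq_model {
	uint16_t nb_free;   /* free descriptors currently available */
	uint16_t completed; /* descriptors the device has finished with */
	uint16_t rs_thresh;
};

#define DQO_TX_MULTIPLIER 4 /* assumed value, for illustration only */

/* Reclaim up to nb_descs completed descriptors back to the free pool. */
static void clean_descs(struct txq_model *q, uint16_t nb_descs)
{
	while (nb_descs-- && q->completed) {
		q->completed--;
		q->nb_free++;
	}
}

/* Returns 1 if the packet's descriptors were reserved, 0 if it must be
 * dropped. Mirrors the patch: clean the larger of the usual batch and
 * the packet's own requirement, then re-check before giving up.
 */
static int reserve_descs(struct txq_model *q, uint16_t nb_descs)
{
	if (q->nb_free < nb_descs) {
		uint16_t nb_to_clean = DQO_TX_MULTIPLIER * q->rs_thresh;

		if (nb_descs > nb_to_clean)
			nb_to_clean = nb_descs; /* the RTE_MAX() in the patch */
		clean_descs(q, nb_to_clean);
		if (q->nb_free < nb_descs)
			return 0; /* still insufficient: drop the packet */
	}
	q->nb_free -= nb_descs;
	return 1;
}
```

The `RTE_MAX()` is the crux of the fix: with `rs_thresh = 8` and a multiplier of 4, the old code cleaned a fixed 32 descriptors, so a packet needing 34 would stall even with completions pending; taking the max lets cleaning keep pace with oversized packets.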
---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2025-10-31 13:53:52.603613932 +0000
+++ 0010-net-gve-clean-when-insufficient-Tx-descriptors.patch	2025-10-31 13:53:52.016523306 +0000
@@ -1 +1 @@
-From 92d330a3eabb1ca2f74d494ebea0104bc7fd081f Mon Sep 17 00:00:00 2001
+From d04632254ef040dd4c79edf95b10a6e0e5902a8d Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 92d330a3eabb1ca2f74d494ebea0104bc7fd081f ]
+
@@ -13 +14,0 @@
-Cc: stable at dpdk.org