patch 'net/gve: send whole packet when mbuf is large' has been queued to stable release 24.11.4

Kevin Traynor ktraynor at redhat.com
Fri Oct 31 15:32:11 CET 2025


Hi,

FYI, your patch has been queued to stable release 24.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/05/25. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/4a67e59c4a033935ec137522d328d522f0b23c4f

Thanks.

Kevin

---
From 4a67e59c4a033935ec137522d328d522f0b23c4f Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash at google.com>
Date: Mon, 7 Jul 2025 16:18:05 -0700
Subject: [PATCH] net/gve: send whole packet when mbuf is large

[ upstream commit ee06313a50a8ebf18254a923152bf6729771cbc2 ]

Before this patch, only one descriptor would be written per mbuf in a
packet. In cases like TSO, it is possible for a single mbuf to have more
bytes than GVE_TX_MAX_BUF_SIZE_DQO. As such, instead of simply
truncating the data down to this size, the driver should write
descriptors for the rest of the data in the mbuf segment.

To that effect, the number of descriptors needed to send a packet must
be corrected to account for the potential additional descriptors.

Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")

Signed-off-by: Joshua Washington <joshwash at google.com>
Reviewed-by: Ankit Garg <nktgrg at google.com>
---
 .mailmap                     |  1 +
 drivers/net/gve/gve_tx_dqo.c | 54 ++++++++++++++++++++++++------------
 2 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/.mailmap b/.mailmap
index e19b01476e..be3cea2d34 100644
--- a/.mailmap
+++ b/.mailmap
@@ -125,4 +125,5 @@ Andy Moreton <andy.moreton at amd.com> <amoreton at xilinx.com> <amoreton at solarflare.c
 Andy Pei <andy.pei at intel.com>
 Anirudh Venkataramanan <anirudh.venkataramanan at intel.com>
+Ankit Garg <nktgrg at google.com>
 Ankur Dwivedi <adwivedi at marvell.com> <ankur.dwivedi at caviumnetworks.com> <ankur.dwivedi at cavium.com>
 Anna Lukin <annal at silicom.co.il>
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 6984f92443..6227fa73b0 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -75,4 +75,17 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 }
 
+static uint16_t
+gve_tx_pkt_nb_data_descs(struct rte_mbuf *tx_pkt)
+{
+	int nb_descs = 0;
+
+	while (tx_pkt) {
+		nb_descs += (GVE_TX_MAX_BUF_SIZE_DQO - 1 + tx_pkt->data_len) /
+			GVE_TX_MAX_BUF_SIZE_DQO;
+		tx_pkt = tx_pkt->next;
+	}
+	return nb_descs;
+}
+
 static inline void
 gve_tx_fill_seg_desc_dqo(volatile union gve_tx_desc_dqo *desc, struct rte_mbuf *tx_pkt)
@@ -98,5 +111,5 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t nb_tx = 0;
 	uint64_t ol_flags;
-	uint16_t nb_used;
+	uint16_t nb_descs;
 	uint16_t tx_id;
 	uint16_t sw_id;
@@ -125,5 +138,4 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		ol_flags = tx_pkt->ol_flags;
-		nb_used = tx_pkt->nb_segs;
 		first_sw_id = sw_id;
 
@@ -131,6 +143,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		csum = !!(ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO);
 
-		nb_used += tso;
-		if (txq->nb_free < nb_used)
+		nb_descs = gve_tx_pkt_nb_data_descs(tx_pkt);
+		nb_descs += tso;
+		if (txq->nb_free < nb_descs)
 			break;
 
@@ -145,19 +158,26 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				PMD_DRV_LOG(DEBUG, "Overwriting an entry in sw_ring");
 
-			txd = &txr[tx_id];
 			sw_ring[sw_id] = tx_pkt;
 
-			/* fill Tx descriptor */
-			txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
-			txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
-			txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
-			txd->pkt.end_of_packet = 0;
-			txd->pkt.checksum_offload_enable = csum;
+			/* fill Tx descriptors */
+			int mbuf_offset = 0;
+			while (mbuf_offset < tx_pkt->data_len) {
+				uint64_t buf_addr = rte_mbuf_data_iova(tx_pkt) +
+					mbuf_offset;
+
+				txd = &txr[tx_id];
+				txd->pkt.buf_addr = rte_cpu_to_le_64(buf_addr);
+				txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
+				txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
+				txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len - mbuf_offset,
+							    GVE_TX_MAX_BUF_SIZE_DQO);
+				txd->pkt.end_of_packet = 0;
+				txd->pkt.checksum_offload_enable = csum;
+
+				mbuf_offset += txd->pkt.buf_size;
+				tx_id = (tx_id + 1) & mask;
+			}
 
-			/* size of desc_ring and sw_ring could be different */
-			tx_id = (tx_id + 1) & mask;
 			sw_id = (sw_id + 1) & sw_mask;
-
 			bytes += tx_pkt->data_len;
 			tx_pkt = tx_pkt->next;
@@ -167,6 +187,6 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		txd->pkt.end_of_packet = 1;
 
-		txq->nb_free -= nb_used;
-		txq->nb_used += nb_used;
+		txq->nb_free -= nb_descs;
+		txq->nb_used += nb_descs;
 	}
 
-- 
2.51.0
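
As an aside for reviewers, the heart of the change is the per-segment
ceiling division used to count data descriptors, plus the offset loop
that walks each segment in GVE_TX_MAX_BUF_SIZE_DQO-sized chunks. The
standalone C sketch below illustrates just that arithmetic; the mbuf
struct and the 16 KB buffer-size value are hypothetical stand-ins for
illustration, not the driver's own definitions.

/*
 * Standalone sketch (not driver code): "struct mbuf" is a hypothetical
 * stand-in for struct rte_mbuf, and MAX_BUF_SIZE is an assumed 16 KB
 * stand-in for GVE_TX_MAX_BUF_SIZE_DQO.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_BUF_SIZE (16 * 1024)

struct mbuf {
	uint32_t data_len;	/* bytes in this segment */
	struct mbuf *next;	/* next segment in the chain, or NULL */
};

/* Same ceiling division as gve_tx_pkt_nb_data_descs() in the patch. */
static uint16_t
nb_data_descs(const struct mbuf *pkt)
{
	uint16_t n = 0;

	for (; pkt != NULL; pkt = pkt->next)
		n += (MAX_BUF_SIZE - 1 + pkt->data_len) / MAX_BUF_SIZE;
	return n;
}

int
main(void)
{
	/* A TSO-style packet: a 40 KB segment chained to a 1 KB segment. */
	struct mbuf seg2 = { .data_len = 1024, .next = NULL };
	struct mbuf seg1 = { .data_len = 40 * 1024, .next = &seg2 };

	/* 40 KB needs 3 descriptors, 1 KB needs 1: prints 4. */
	printf("descriptors needed: %u\n", (unsigned)nb_data_descs(&seg1));

	/* The fill loop from the patch, reduced to its offset arithmetic:
	 * each pass consumes at most MAX_BUF_SIZE bytes of the segment. */
	unsigned off = 0;
	while (off < seg1.data_len) {
		unsigned len = seg1.data_len - off;
		if (len > MAX_BUF_SIZE)
			len = MAX_BUF_SIZE;
		printf("desc: offset=%u len=%u\n", off, len);
		off += len;
	}
	return 0;
}

With 16 KB chunks, the 40 KB segment is emitted as 16 K + 16 K + 8 K,
which is what the new while loop in gve_tx_burst_dqo() produces for
that segment before end_of_packet is set on the final descriptor.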

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2025-10-31 13:53:52.576512063 +0000
+++ 0009-net-gve-send-whole-packet-when-mbuf-is-large.patch	2025-10-31 13:53:52.015523303 +0000
@@ -1 +1 @@
-From ee06313a50a8ebf18254a923152bf6729771cbc2 Mon Sep 17 00:00:00 2001
+From 4a67e59c4a033935ec137522d328d522f0b23c4f Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit ee06313a50a8ebf18254a923152bf6729771cbc2 ]
+
@@ -16 +17,0 @@
-Cc: stable at dpdk.org
@@ -26 +27 @@
-index d4c04f3b81..6e5234223b 100644
+index e19b01476e..be3cea2d34 100644
@@ -29 +30 @@
-@@ -126,4 +126,5 @@ Andy Moreton <andy.moreton at amd.com> <amoreton at solarflare.com>
+@@ -125,4 +125,5 @@ Andy Moreton <andy.moreton at amd.com> <amoreton at xilinx.com> <amoreton at solarflare.c
@@ -33,2 +34,2 @@
- Ankur Dwivedi <adwivedi at marvell.com> <ankur.dwivedi at caviumnetworks.com>
- Ankur Dwivedi <adwivedi at marvell.com> <ankur.dwivedi at cavium.com>
+ Ankur Dwivedi <adwivedi at marvell.com> <ankur.dwivedi at caviumnetworks.com> <ankur.dwivedi at cavium.com>
+ Anna Lukin <annal at silicom.co.il>


