patch 'net/gve: fix DQO TSO descriptor limit' has been queued to stable release 24.11.4
Kevin Traynor
ktraynor at redhat.com
Fri Oct 31 15:32:16 CET 2025
Hi,
FYI, your patch has been queued to stable release 24.11.4
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/05/25, so please
shout if you have any.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable
This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/54efb8dfcbfd2b1b0e3ee60ccaec6e1d98a90ddc
Thanks.
Kevin
---
From 54efb8dfcbfd2b1b0e3ee60ccaec6e1d98a90ddc Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash at google.com>
Date: Mon, 7 Jul 2025 16:18:10 -0700
Subject: [PATCH] net/gve: fix DQO TSO descriptor limit
[ upstream commit be8f0eb81f987cbd64c2d37fe6f8b2e888328f23 ]
The DQO queue format expects that any MTU-sized packet or segment will
span at most 10 data descriptors.
In the non-TSO case, this simply means that a given packet can have at
most 10 descriptors.
In the TSO case, things are a bit more complex. For large TSO packets,
mbufs must be parsed and split into tso_segsz-sized (MSS) segments. For
any such MSS segment, the number of descriptors that would be used to
transmit the segment must be counted. The following restrictions apply
when counting descriptors:
1) Every TSO segment (including the very first) will be prepended by a
_separate_ data descriptor holding only header data,
2) The hardware can send at most up to 16K bytes in a single data
descriptor, and
3) The start of every mbuf counts as a separator between data
descriptors -- data is not assumed to be coalesced or copied.
The value of nb_mtu_seg_max is set to GVE_TX_MAX_DATA_DESCS-1 to ensure
that the hidden extra prepended descriptor added to the beginning of
each segment in the TSO case is accounted for.
Fixes: 403c671a46b6 ("net/gve: support TSO in DQO RDA")
Signed-off-by: Joshua Washington <joshwash at google.com>
Reviewed-by: Ankit Garg <nktgrg at google.com>
---
drivers/net/gve/gve_ethdev.c | 2 +-
drivers/net/gve/gve_tx_dqo.c | 64 +++++++++++++++++++++++++++++++++++-
2 files changed, 64 insertions(+), 2 deletions(-)
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 5be4f64fde..82fde360b1 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -604,5 +604,5 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_min = priv->min_tx_desc_cnt,
.nb_align = 1,
- .nb_mtu_seg_max = GVE_TX_MAX_DATA_DESCS,
+ .nb_mtu_seg_max = GVE_TX_MAX_DATA_DESCS - 1,
};
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 27f98cdeb3..3befbbcacb 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -81,4 +81,66 @@ gve_tx_clean_descs_dqo(struct gve_tx_queue *txq, uint16_t nb_descs) {
}
+/* GVE expects at most 10 data descriptors per mtu-sized segment. Beyond this,
+ * the hardware will assume the driver is malicious and stop transmitting
+ * packets altogether. Validate that a packet can be sent to avoid
+ * posting descriptors for an invalid packet.
+ */
+static inline bool
+gve_tx_validate_descs(struct rte_mbuf *tx_pkt, uint16_t nb_descs, bool is_tso)
+{
+ if (!is_tso)
+ return nb_descs <= GVE_TX_MAX_DATA_DESCS;
+
+ int tso_segsz = tx_pkt->tso_segsz;
+ int num_descs, seg_offset, mbuf_len;
+ int headlen = tx_pkt->l2_len + tx_pkt->l3_len + tx_pkt->l4_len;
+
+ /* Headers will be split into their own buffer. */
+ num_descs = 1;
+ seg_offset = 0;
+ mbuf_len = tx_pkt->data_len - headlen;
+
+ while (tx_pkt) {
+ if (!mbuf_len)
+ goto next_mbuf;
+
+ int seg_remain = tso_segsz - seg_offset;
+ if (num_descs == GVE_TX_MAX_DATA_DESCS && seg_remain)
+ return false;
+
+ if (seg_remain < mbuf_len) {
+ seg_offset = mbuf_len % tso_segsz;
+ /* The MSS is bound from above by 9728B, so a
+ * single TSO segment in the middle of an mbuf
+ * will be part of at most two descriptors, and
+ * is not at risk of defying this limitation.
+ * Thus, such segments are ignored.
+ */
+ int mbuf_remain = tx_pkt->data_len % GVE_TX_MAX_BUF_SIZE_DQO;
+
+ /* For each TSO segment, HW will prepend
+ * headers. The remaining bytes of this mbuf
+ * will be the start of the payload of the next
+ * TSO segment. In addition, if the final
+ * segment in this mbuf is divided between two
+ * descriptors, both must be counted.
+ */
+ num_descs = 1 + !!(seg_offset) +
+ (mbuf_remain < seg_offset && mbuf_remain);
+ } else {
+ seg_offset += mbuf_len;
+ num_descs++;
+ }
+
+next_mbuf:
+ tx_pkt = tx_pkt->next;
+ if (tx_pkt)
+ mbuf_len = tx_pkt->data_len;
+ }
+
+
+ return true;
+}
+
static uint16_t
gve_tx_pkt_nb_data_descs(struct rte_mbuf *tx_pkt)
@@ -167,5 +229,5 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Drop packet if it doesn't adhere to hardware limits. */
- if (!tso && nb_descs > GVE_TX_MAX_DATA_DESCS) {
+ if (!gve_tx_validate_descs(tx_pkt, nb_descs, tso)) {
txq->stats.too_many_descs++;
break;
--
2.51.0
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2025-10-31 13:53:52.713761670 +0000
+++ 0014-net-gve-fix-DQO-TSO-descriptor-limit.patch 2025-10-31 13:53:52.021523321 +0000
@@ -1 +1 @@
-From be8f0eb81f987cbd64c2d37fe6f8b2e888328f23 Mon Sep 17 00:00:00 2001
+From 54efb8dfcbfd2b1b0e3ee60ccaec6e1d98a90ddc Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit be8f0eb81f987cbd64c2d37fe6f8b2e888328f23 ]
+
@@ -30 +31,0 @@
-Cc: stable at dpdk.org
@@ -40 +41 @@
-index 81325ba98c..ef1c543aac 100644
+index 5be4f64fde..82fde360b1 100644