patch 'net/gve: add IO memory barriers before reading descriptors' has been queued to stable release 23.11.3

Xueming Li xuemingl at nvidia.com
Mon Nov 11 07:28:19 CET 2024


Hi,

FYI, your patch has been queued to stable release 23.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.

Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. This indicates whether any rebasing was needed
to apply the patch to the stable branch. If there were code changes for
rebasing (i.e., not only metadata diffs), please double-check that the rebase
was done correctly.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1c6a6173878598d07e8fbfef9fe64114a89cb003

Thanks.

Xueming Li <xuemingl at nvidia.com>

---
From 1c6a6173878598d07e8fbfef9fe64114a89cb003 Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash at google.com>
Date: Thu, 3 Oct 2024 18:05:35 -0700
Subject: [PATCH] net/gve: add IO memory barriers before reading descriptors
Cc: Xueming Li <xuemingl at nvidia.com>

[ upstream commit f8fee84eb48cdf13a7a29f5851a2e2a41045813a ]

Without memory barriers, there is no guarantee that the CPU will
actually wait until after the descriptor has been fully written before
loading descriptor data. In this case, it is possible that stale data is
read and acted on by the driver when processing TX or RX completions.

This change adds read memory barriers just after the generation bit is
read in both the RX and the TX path to ensure that the NIC has properly
passed ownership to the driver before descriptor data is read in full.

Note that memory barriers should not be needed after writing the RX
buffer queue/TX descriptor queue tails because rte_write32 includes an
implicit write memory barrier.

Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")

Signed-off-by: Joshua Washington <joshwash at google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi at google.com>
Reviewed-by: Rushil Gupta <rushilg at google.com>
---
 drivers/net/gve/gve_rx_dqo.c | 2 ++
 drivers/net/gve/gve_tx_dqo.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index f55a03f8c4..3f694a4d9a 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -72,6 +72,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (rx_desc->generation != rxq->cur_gen_bit)
 			break;

+		rte_io_rmb();
+
 		if (unlikely(rx_desc->rx_error)) {
 			rxq->stats.errors++;
 			continue;
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index b9d6d01749..ce3681b6c6 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -24,6 +24,8 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 	if (compl_desc->generation != txq->cur_gen_bit)
 		return;

+	rte_io_rmb();
+
 	compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag);

 	aim_txq = txq->txqs[compl_desc->id];
--
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2024-11-11 14:23:09.326330154 +0800
+++ 0093-net-gve-add-IO-memory-barriers-before-reading-descri.patch	2024-11-11 14:23:05.242192837 +0800
@@ -1 +1 @@
-From f8fee84eb48cdf13a7a29f5851a2e2a41045813a Mon Sep 17 00:00:00 2001
+From 1c6a6173878598d07e8fbfef9fe64114a89cb003 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit f8fee84eb48cdf13a7a29f5851a2e2a41045813a ]
@@ -21 +23,0 @@
-Cc: stable at dpdk.org
@@ -32 +34 @@
-index 5371bab77d..285c6ddd61 100644
+index f55a03f8c4..3f694a4d9a 100644
@@ -35 +37 @@
-@@ -132,6 +132,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+@@ -72,6 +72,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
@@ -45 +47 @@
-index 731c287224..6984f92443 100644
+index b9d6d01749..ce3681b6c6 100644
