[PATCH 2/2] net/ngbe: add proper memory barriers in Rx
Jiawen Wu
jiawenwu at trustnetic.com
Mon Oct 30 11:51:44 CET 2023
Refer to commit 85e46c532bc7 ("net/ixgbe: add proper memory barriers in
Rx"); ngbe has the same issue.

Although, due to the testing schedule, current testing has not yet
reproduced this problem, apply the same fix in ngbe to ensure that the
descriptor reads are correctly ordered.
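
For context, the ordering being enforced looks roughly like the sketch
below. The descriptor type, field names and DD bit value are simplified
placeholders rather than the real ngbe definitions, and the endianness
conversion of the DD bit is omitted; only rte_atomic_thread_fence() is
the call actually added by this patch:

#include <stdint.h>
#include <rte_atomic.h>

/* Simplified stand-in for an Rx descriptor; not the real ngbe layout. */
struct sketch_rx_desc {
	uint32_t status;     /* contains the DD ("descriptor done") bit */
	uint32_t pkt_len;
};

#define SKETCH_STAT_DD 0x1u

/* Return 0 if the descriptor is not done yet, 1 if *out was filled. */
static int
sketch_read_desc(volatile struct sketch_rx_desc *rxdp,
		 struct sketch_rx_desc *out)
{
	uint32_t staterr = rxdp->status;

	if (!(staterr & SKETCH_STAT_DD))
		return 0;

	/*
	 * Acquire fence: the status/DD load above must be ordered before
	 * the loads of the remaining descriptor words below, otherwise a
	 * weakly ordered CPU could read fields the NIC has not written yet.
	 */
	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);

	*out = *rxdp;
	return 1;
}

This is only meant to show why the fence sits between the DD check and
the full descriptor copy; the paths touched by the patch are
ngbe_recv_pkts() and ngbe_recv_pkts_sc() in the diff below.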
Fixes: 79f3128d4d98 ("net/ngbe: support scattered Rx")
Cc: stable at dpdk.org
Signed-off-by: Jiawen Wu <jiawenwu at trustnetic.com>
---
drivers/net/ngbe/ngbe_rxtx.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index ec353a30b1..54a6f6a887 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -1223,11 +1223,22 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* of accesses cannot be reordered by the compiler. If they were
* not volatile, they could be reordered which could lead to
* using invalid descriptor fields when read from rxd.
+ *
+ * Meanwhile, to prevent the CPU from executing out of order, we
+ * need to use a proper memory barrier to ensure the memory
+ * ordering below.
*/
rxdp = &rx_ring[rx_id];
staterr = rxdp->qw1.lo.status;
if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
break;
+
+ /*
+ * Use acquire fence to ensure that status_error which includes
+ * DD bit is loaded before loading of other descriptor words.
+ */
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
rxd = *rxdp;

/*
@@ -1454,6 +1465,12 @@ ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
if (!(staterr & NGBE_RXD_STAT_DD))
break;

+ /*
+ * Use acquire fence to ensure that status_error which includes
+ * DD bit is loaded before loading of other descriptor words.
+ */
+ rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
rxd = *rxdp;

PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
--
2.27.0