patch 'net/mlx5: fix Rx queue reference count in flushing flows' has been queued to stable release 23.11.3
Xueming Li
xuemingl at nvidia.com
Sat Dec 7 09:00:32 CET 2024
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24, so please
shout if you have any objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=95cad5da69983fed51d4ff6b2636bface18c53c8
Thanks.
Xueming Li <xuemingl at nvidia.com>
---
From 95cad5da69983fed51d4ff6b2636bface18c53c8 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz at nvidia.com>
Date: Wed, 13 Nov 2024 09:22:44 +0200
Subject: [PATCH] net/mlx5: fix Rx queue reference count in flushing flows
Cc: Xueming Li <xuemingl at nvidia.com>
[ upstream commit 1ea333d2de220d5bad600ed50b43f91f7703c123 ]
Some indirect tables and hrxqs are created during rule creation with
a QUEUE or RSS action. When stopping a port, 'dev_started' is set
to 0 at the beginning. The mlx5_ind_table_obj_release() should still
dereference the queue(s) when it is called in the polling
of flow rule deletion, due to the fact that a flow with a Q/RSS action
always refers to the active Rx queues.
The callback can only pass one input parameter, so a per-device
flag is used to indicate that the flushing of user flows is in
progress. The reference count of the queue(s) is then
decreased accordingly.
Fixes: 3a2f674b6aa8 ("net/mlx5: add queue and RSS HW steering action")
Signed-off-by: Bing Zhao <bingz at nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski at nvidia.com>
---
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow.c | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 8 +++++---
3 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9a6bd976c2..55c29e31a2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1938,6 +1938,7 @@ struct mlx5_priv {
uint32_t hws_mark_refcnt; /* HWS mark action reference counter. */
struct rte_pmd_mlx5_flow_engine_mode_info mode_info; /* Process set flow engine info. */
struct mlx5_flow_hw_attr *hw_attr; /* HW Steering port configuration. */
+ bool hws_rule_flushing; /**< Whether this port is in rules flushing stage. */
#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
/* Item template list. */
LIST_HEAD(flow_hw_itt, rte_flow_pattern_template) flow_hw_itt;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6286eef010..1e9484f372 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8101,7 +8101,9 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
if (priv->sh->config.dv_flow_en == 2 &&
type == MLX5_FLOW_TYPE_GEN) {
+ priv->hws_rule_flushing = true;
flow_hw_q_flow_flush(dev, NULL);
+ priv->hws_rule_flushing = false;
return;
}
#endif
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index aa2e8fd9e3..6d28bcb57c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2895,6 +2895,7 @@ static void
__mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ bool deref_rxqs = true;
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
if (hrxq->hws_flags)
@@ -2904,9 +2905,10 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
#endif
priv->obj_ops.hrxq_destroy(hrxq);
if (!hrxq->standalone) {
- mlx5_ind_table_obj_release(dev, hrxq->ind_table,
- hrxq->hws_flags ?
- (!!dev->data->dev_started) : true);
+ if (!dev->data->dev_started && hrxq->hws_flags &&
+ !priv->hws_rule_flushing)
+ deref_rxqs = false;
+ mlx5_ind_table_obj_release(dev, hrxq->ind_table, deref_rxqs);
}
mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.579902802 +0800
+++ 0074-net-mlx5-fix-Rx-queue-reference-count-in-flushing-fl.patch 2024-12-06 23:26:44.073044826 +0800
@@ -1 +1 @@
-From 1ea333d2de220d5bad600ed50b43f91f7703c123 Mon Sep 17 00:00:00 2001
+From 95cad5da69983fed51d4ff6b2636bface18c53c8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 1ea333d2de220d5bad600ed50b43f91f7703c123 ]
@@ -19 +21,0 @@
-Cc: stable at dpdk.org
@@ -25 +27 @@
- drivers/net/mlx5/mlx5_flow.c | 3 +++
+ drivers/net/mlx5/mlx5_flow.c | 2 ++
@@ -27 +29 @@
- 3 files changed, 9 insertions(+), 3 deletions(-)
+ 3 files changed, 8 insertions(+), 3 deletions(-)
@@ -30 +32 @@
-index 6e8295110e..89d277b523 100644
+index 9a6bd976c2..55c29e31a2 100644
@@ -33,2 +35,2 @@
-@@ -2060,6 +2060,7 @@ struct mlx5_priv {
- RTE_ATOMIC(uint32_t) hws_mark_refcnt; /* HWS mark action reference counter. */
+@@ -1938,6 +1938,7 @@ struct mlx5_priv {
+ uint32_t hws_mark_refcnt; /* HWS mark action reference counter. */
@@ -42 +44 @@
-index d631ed150c..16ddd05448 100644
+index 6286eef010..1e9484f372 100644
@@ -45 +47 @@
-@@ -8118,7 +8118,10 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+@@ -8101,7 +8101,9 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
@@ -52 +54 @@
-+ return;
+ return;
@@ -55 +56,0 @@
- MLX5_IPOOL_FOREACH(priv->flows[type], fidx, flow) {
@@ -57 +58 @@
-index d437835b73..0737f60272 100644
+index aa2e8fd9e3..6d28bcb57c 100644
@@ -60 +61 @@
-@@ -2894,6 +2894,7 @@ static void
+@@ -2895,6 +2895,7 @@ static void
@@ -68 +69 @@
-@@ -2903,9 +2904,10 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
+@@ -2904,9 +2905,10 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)