patch 'net/mlx5: fix VXLAN matching with zero value' has been queued to stable release 22.11.3

Xueming Li xuemingl at nvidia.com
Sun Jun 25 08:35:07 CEST 2023


Hi,

FYI, your patch has been queued to stable release 22.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 06/27/23, so please
shout if you have any objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e., not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=eb02902423e140d7a9627bb377026ded42628fe1

Thanks.

Xueming Li <xuemingl at nvidia.com>

---
From eb02902423e140d7a9627bb377026ded42628fe1 Mon Sep 17 00:00:00 2001
From: Rongwei Liu <rongweil at nvidia.com>
Date: Tue, 16 May 2023 08:40:53 +0300
Subject: [PATCH] net/mlx5: fix VXLAN matching with zero value
Cc: Xueming Li <xuemingl at nvidia.com>

[ upstream commit 40c78a1f76cdbf9d0e1002d603b5d381d2e0a6b4 ]

When an application wants to match a VXLAN last_rsvd value of zero,
the PMD mistakenly sets the matching mask field to zero, which causes
traffic with any last_rsvd value to hit. The matching mask should be
taken directly from the application input; there is no need to perform
the bit reset operation.
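
For illustration only, a minimal sketch of how an application could request
such a match through the public rte_flow API (field names follow the legacy
rte_flow_item_vxlan layout used on the 22.11 branch; the VNI value is a
made-up example):

#include <rte_flow.h>

/* Hypothetical sketch: ask to match VXLAN traffic whose last_rsvd byte
 * is exactly zero. The mask byte is 0xff so the field is matched; this
 * fix makes mlx5 honor that mask instead of dropping it when the spec
 * value happens to be zero. */
static const struct rte_flow_item_vxlan vxlan_spec = {
	.vni = { 0x12, 0x34, 0x56 },	/* example VNI */
	.rsvd1 = 0x00,			/* last_rsvd value to match */
};
static const struct rte_flow_item_vxlan vxlan_mask = {
	.vni = { 0xff, 0xff, 0xff },
	.rsvd1 = 0xff,			/* match last_rsvd exactly */
};
static const struct rte_flow_item vxlan_item = {
	.type = RTE_FLOW_ITEM_TYPE_VXLAN,
	.spec = &vxlan_spec,
	.mask = &vxlan_mask,
};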

Fixes: cd4ab742064a ("net/mlx5: split flow item matcher and value translation")

Signed-off-by: Rongwei Liu <rongweil at nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 19 ++-----------------
 1 file changed, 2 insertions(+), 17 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 12db56f173..485afdf5ca 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9223,12 +9223,10 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 {
 	const struct rte_flow_item_vxlan *vxlan_m;
 	const struct rte_flow_item_vxlan *vxlan_v;
-	const struct rte_flow_item_vxlan *vxlan_vv = item->spec;
 	void *headers_v;
 	void *misc_v;
 	void *misc5_v;
 	uint32_t tunnel_v;
-	uint32_t *tunnel_header_v;
 	char *vni_v;
 	uint16_t dport;
 	int size;
@@ -9280,24 +9278,11 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
 			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
 		return;
 	}
-	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
-						   misc5_v,
-						   tunnel_header_1);
 	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
 		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
 		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
-	*tunnel_header_v = tunnel_v;
-	if (key_type == MLX5_SET_MATCHER_SW_M) {
-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
-		if (!tunnel_v)
-			*tunnel_header_v = 0x0;
-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
-	} else {
-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
-	}
+	tunnel_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+	MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_1, RTE_BE32(tunnel_v));
 }
 
 /**
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2023-06-25 14:32:00.890993700 +0800
+++ 0089-net-mlx5-fix-VXLAN-matching-with-zero-value.patch	2023-06-25 14:31:58.495773900 +0800
@@ -1 +1 @@
-From 40c78a1f76cdbf9d0e1002d603b5d381d2e0a6b4 Mon Sep 17 00:00:00 2001
+From eb02902423e140d7a9627bb377026ded42628fe1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 40c78a1f76cdbf9d0e1002d603b5d381d2e0a6b4 ]
@@ -13 +15,0 @@
-Cc: stable at dpdk.org
@@ -22 +24 @@
-index e7a2ae933c..9ef9f13cbb 100644
+index 12db56f173..485afdf5ca 100644
@@ -25 +27 @@
-@@ -9470,12 +9470,10 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
+@@ -9223,12 +9223,10 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
@@ -38,2 +40,2 @@
-@@ -9527,24 +9525,11 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
- 			vni_v[i] = vxlan_m->hdr.vni[i] & vxlan_v->hdr.vni[i];
+@@ -9280,24 +9278,11 @@ flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
+ 			vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i];
@@ -45,3 +47,3 @@
- 	tunnel_v = (vxlan_v->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
- 		   (vxlan_v->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
- 		   (vxlan_v->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
+ 	tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
+ 		   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
+ 		   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
@@ -50,3 +52,3 @@
--		tunnel_v = (vxlan_vv->hdr.vni[0] & vxlan_m->hdr.vni[0]) |
--			   (vxlan_vv->hdr.vni[1] & vxlan_m->hdr.vni[1]) << 8 |
--			   (vxlan_vv->hdr.vni[2] & vxlan_m->hdr.vni[2]) << 16;
+-		tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) |
+-			   (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 |
+-			   (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16;
@@ -55,2 +57,2 @@
--		if (vxlan_vv->hdr.rsvd1 & vxlan_m->hdr.rsvd1)
--			*tunnel_header_v |= vxlan_v->hdr.rsvd1 << 24;
+-		if (vxlan_vv->rsvd1 & vxlan_m->rsvd1)
+-			*tunnel_header_v |= vxlan_v->rsvd1 << 24;
@@ -58 +60 @@
--		*tunnel_header_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
+-		*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
@@ -60 +62 @@
-+	tunnel_v |= (vxlan_v->hdr.rsvd1 & vxlan_m->hdr.rsvd1) << 24;
++	tunnel_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
