[dpdk-stable] patch 'net/mlx5: allow pattern start from IP' has been queued to LTS release 18.11.6

Kevin Traynor ktraynor at redhat.com
Wed Dec 11 22:26:05 CET 2019


Hi,

FYI, your patch has been queued to LTS release 18.11.6

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/17/19. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply it to the stable branch. If there were code changes for
rebasing (i.e. not only metadata diffs), please double-check that the rebase
was correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable-queue/commit/7ee4881da1851b7dce7ad9e2cc041f14a9ab7130

Thanks.

Kevin.

---
From 7ee4881da1851b7dce7ad9e2cc041f14a9ab7130 Mon Sep 17 00:00:00 2001
From: Xiaoyu Min <jackmin at mellanox.com>
Date: Tue, 5 Nov 2019 10:03:09 +0200
Subject: [PATCH] net/mlx5: allow pattern start from IP

[ upstream commit 0be2fba2f07d91aa7436fcf452aaff05ff5c6a62 ]

Some applications, e.g. OVS, have rules like:

[1] pattern ipv4 / end actions ...

which is intended to match IPv4 on non-VLAN Ethernet only, and the
MLX5 NIC supports this.

So the PMD should accept it.
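
For illustration, such a rule could be built through the rte_flow API
roughly as follows (a minimal sketch; the port id and drop action are
placeholders chosen only to make the example complete, not taken from
the patch):

	#include <rte_flow.h>

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		/* No preceding eth item: the pattern starts from IPv4. */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	/* Before this fix the mlx5 PMD rejected such a rule with
	 * "no L2 layer before IPV4"; with it, validation succeeds. */
	int ret = rte_flow_validate(0 /* port id */, &attr, pattern,
				    actions, &err);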

Fixes: 906a2efae8da ("net/mlx5: validate flow rule item order")

Signed-off-by: Xiaoyu Min <jackmin at mellanox.com>
Acked-by: Ori Kam <orika at mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo at mellanox.com>
---
 drivers/net/mlx5/mlx5_flow.c | 28 ++++++++++------------------
 1 file changed, 10 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index cf9cdcfe3..2bf535213 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1069,9 +1069,15 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "multiple L2 layers not supported");
-	if (tunnel && (item_flags & MLX5_FLOW_LAYER_INNER_L3))
+	if ((!tunnel && (item_flags & MLX5_FLOW_LAYER_OUTER_L3)) ||
+	    (tunnel && (item_flags & MLX5_FLOW_LAYER_INNER_L3)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "inner L2 layer should not "
-					  "follow inner L3 layers");
+					  "L2 layer should not follow "
+					  "L3 layers");
+	if ((!tunnel && (item_flags & MLX5_FLOW_LAYER_OUTER_VLAN)) ||
+	    (tunnel && (item_flags & MLX5_FLOW_LAYER_INNER_VLAN)))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "L2 layer should not follow VLAN");
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
@@ -1117,6 +1123,4 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 					MLX5_FLOW_LAYER_OUTER_VLAN;
 
-	const uint64_t l2m = tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
-				      MLX5_FLOW_LAYER_OUTER_L2;
 	if (item_flags & vlanm)
 		return rte_flow_error_set(error, EINVAL,
@@ -1126,9 +1130,5 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "L2 layer cannot follow L3/L4 layer");
-	else if ((item_flags & l2m) == 0)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "no L2 layer before VLAN");
+					  "VLAN cannot follow L3/L4 layer");
 	if (!mask)
 		mask = &rte_flow_item_vlan_mask;
@@ -1197,8 +1197,4 @@ mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "L3 cannot follow an L4 layer.");
-	else if (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L2))
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "no L2 layer before IPV4");
 	if (!mask)
 		mask = &rte_flow_item_ipv4_mask;
@@ -1265,8 +1261,4 @@ mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "L3 cannot follow an L4 layer.");
-	else if (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L2))
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "no L2 layer before IPV6");
 	if (!mask)
 		mask = &rte_flow_item_ipv6_mask;
-- 
2.21.0
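
For contrast, the ordering checks kept by this patch still reject an eth
item placed after an L3 item; a sketch of a pattern that remains invalid
(the variable name is illustrative only):

	struct rte_flow_item bad_pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		/* eth after ipv4 is refused by mlx5_flow_validate_item_eth()
		 * with "L2 layer should not follow L3 layers". */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};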

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2019-12-11 21:24:13.796624458 +0000
+++ 0013-net-mlx5-allow-pattern-start-from-IP.patch	2019-12-11 21:24:12.594652726 +0000
@@ -1 +1 @@
-From 0be2fba2f07d91aa7436fcf452aaff05ff5c6a62 Mon Sep 17 00:00:00 2001
+From 7ee4881da1851b7dce7ad9e2cc041f14a9ab7130 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 0be2fba2f07d91aa7436fcf452aaff05ff5c6a62 ]
+
@@ -16 +17,0 @@
-Cc: stable at dpdk.org
@@ -26 +27 @@
-index 54f4cfe04..e90301ccd 100644
+index cf9cdcfe3..2bf535213 100644
@@ -29 +30 @@
-@@ -1277,9 +1277,15 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
+@@ -1069,9 +1069,15 @@ mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
@@ -48 +49 @@
-@@ -1328,6 +1334,4 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
+@@ -1117,6 +1123,4 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
@@ -55 +56 @@
-@@ -1337,9 +1341,5 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
+@@ -1126,9 +1130,5 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
@@ -66 +67 @@
-@@ -1465,8 +1465,4 @@ mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
+@@ -1197,8 +1197,4 @@ mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
@@ -68 +69 @@
- 					  "L3 cannot follow an NVGRE layer.");
+ 					  "L3 cannot follow an L4 layer.");
@@ -75 +76 @@
-@@ -1571,8 +1567,4 @@ mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item,
+@@ -1265,8 +1261,4 @@ mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item,
@@ -77 +78 @@
- 					  "L3 cannot follow an NVGRE layer.");
+ 					  "L3 cannot follow an L4 layer.");


