[PATCH] net/mlx5: increase number of supported DV sub-flows
Gregory Etelson
getelson at nvidia.com
Sun Oct 27 14:25:53 CET 2024
The following testpmd example fails with the existing limit on DV
sub-flows:
dpdk-testpmd -a PCI,dv_xmeta_en=1,l3_vxlan_en=1,dv_flow_en=1 -- \
-i --nb-cores=4 --rxq=5 --txq=5
set sample_actions 1 mark id 43704 / \
rss queues 3 0 1 1 end types ipv4 ipv4-other udp tcp ipv4-udp end / \
end
flow create 0 priority 15 group 271 ingress \
pattern mark id spec 16777184 id mask 0xffffff / end \
actions sample ratio 1 index 1 / queue index 0 / end
Increase the number of supported DV sub-flows to 64.
Signed-off-by: Gregory Etelson <getelson at nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski at nvidia.com>
---
drivers/net/mlx5/mlx5_flow.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index db56ae051d..9a8eccdd25 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -974,7 +974,7 @@ struct mlx5_flow_verbs_workspace {
#define MLX5_SCALE_JUMP_FLOW_GROUP_BIT 1
/** Maximal number of device sub-flows supported. */
-#define MLX5_NUM_MAX_DEV_FLOWS 32
+#define MLX5_NUM_MAX_DEV_FLOWS 64
/**
* tunnel offload rules type
--
2.43.0