[dpdk-dev] [PATCH] net/mlx5: optimize tunnel offload index pool

Suanming Mou suanmingm at nvidia.com
Mon Dec 7 06:58:34 CET 2020


Currently, when an index pool is created without a configured trunk
size, the pool falls back to a default trunk size of 4096 entries.

The maximum number of tunnel offloads supported is 256 (MLX5_MAX_TUNNELS),
so creating the index pool with a trunk size of 4096 wastes memory.

This commit changes the tunnel offload index pool trunk size to
MLX5_MAX_TUNNELS to save memory.
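
Below is a minimal, self-contained sketch of the default-trunk-size
behaviour described above. It is illustrative only: the config struct is
reduced to the two relevant fields, and names such as
ipool_effective_trunk_size and IPOOL_DEFAULT_TRUNK_SIZE are hypothetical,
not taken from mlx5_utils.c.

  /*
   * Illustrative sketch only -- the constant and helper names below are
   * hypothetical and do not come from mlx5_utils.c.
   */
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define IPOOL_DEFAULT_TRUNK_SIZE 4096u /* default when trunk_size == 0 */
  #define MLX5_MAX_TUNNELS 256u          /* max tunnel offloads supported */

  struct ipool_cfg {
          uint32_t size;       /* per-entry size in bytes */
          uint32_t trunk_size; /* entries per trunk; 0 means "use default" */
  };

  /* Trunk size the pool will actually allocate for a given config. */
  static uint32_t
  ipool_effective_trunk_size(const struct ipool_cfg *cfg)
  {
          return cfg->trunk_size ? cfg->trunk_size : IPOOL_DEFAULT_TRUNK_SIZE;
  }

  int
  main(void)
  {
          struct ipool_cfg before = { .size = 128, .trunk_size = 0 };
          struct ipool_cfg after = { .size = 128,
                                     .trunk_size = MLX5_MAX_TUNNELS };

          /* Before the patch: 4096 entries per trunk for <= 256 tunnels. */
          printf("before: %" PRIu32 " entries/trunk\n",
                 ipool_effective_trunk_size(&before));
          /* After the patch: the trunk matches the real maximum. */
          printf("after:  %" PRIu32 " entries/trunk\n",
                 ipool_effective_trunk_size(&after));
          return 0;
  }

Running the sketch prints 4096 for the unconfigured case and 256 once
trunk_size is set to MLX5_MAX_TUNNELS, which is the saving this patch
targets.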

Signed-off-by: Suanming Mou <suanmingm at nvidia.com>
Reviewed-by: Gregory Etelson <getelson at nvidia.com>
Acked-by: Matan Azrad <matan at nvidia.com>
---
 drivers/net/mlx5/mlx5.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index ca3667a..a742f4b 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -265,6 +265,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 	},
 	[MLX5_IPOOL_TUNNEL_ID] = {
 		.size = sizeof(struct mlx5_flow_tunnel),
+		.trunk_size = MLX5_MAX_TUNNELS,
 		.need_lock = 1,
 		.release_mem_en = 1,
 		.type = "mlx5_tunnel_offload",
-- 
1.8.3.1
