[dpdk-dev] [PATCH] net/mlx5: fix multiple flow table hash list

Raslan Darawsheh rasland at mellanox.com
Tue Dec 17 13:13:40 CET 2019


Hi,

> -----Original Message-----
> From: dev <dev-bounces at dpdk.org> On Behalf Of Xiaoyu Min
> Sent: Monday, December 16, 2019 11:28 AM
> To: Ori Kam <orika at mellanox.com>; Matan Azrad <matan at mellanox.com>;
> Shahaf Shuler <shahafs at mellanox.com>; Slava Ovsiienko
> <viacheslavo at mellanox.com>
> Cc: dev at dpdk.org; stable at dpdk.org; Zhike Wang <wangzhike at jd.com>
> Subject: [dpdk-dev] [PATCH] net/mlx5: fix multiple flow table hash list
> 
> Eth devices which share one ibv device need only one flow table hash
> list.
> 
> Currently, the flow table hash list is created for each eth device,
> regardless of whether they share one ibv device or not.
> 
> If the devices share one ibv device, the previously created hash list
> becomes dangling, because the pointer to it (sh->flow_tbls) is
> overwritten by the hash list created later.
> 
> To fix this, just don't create the hash list if it has already been
> created.
> 
> Fixes: 54534725d2f3 ("net/mlx5: fix flow table hash list conversion")
> Cc: stable at dpdk.org
> 
> Reported-by: Zhike Wang <wangzhike at jd.com>
> Signed-off-by: Xiaoyu Min <jackmin at mellanox.com>
> ---
>  drivers/net/mlx5/mlx5.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index d84a6f91b4..50960c91ce 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -868,8 +868,13 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
>  {
>  	struct mlx5_ibv_shared *sh = priv->sh;
>  	char s[MLX5_HLIST_NAMESIZE];
> -	int err = mlx5_alloc_table_hash_list(priv);
> +	int err = 0;
> 
> +	if (!sh->flow_tbls)
> +		err = mlx5_alloc_table_hash_list(priv);
> +	else
> +		DRV_LOG(DEBUG, "sh->flow_tbls[%p] already created, reuse\n",
> +			(void *)sh->flow_tbls);
>  	if (err)
>  		return err;
>  	/* Create tags hash list table. */
> --
> 2.24.0
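
For readers skimming the diff: the change simply guards the allocation with
a NULL check, so only the first port on a shared ibv context allocates the
flow table hash list and later ports reuse it. Below is a minimal standalone
sketch of that pattern; struct shared_ctx, alloc_table_hash_list() and
alloc_shared_dr() are simplified stand-ins for illustration, not the real
mlx5 definitions.

    /*
     * Minimal sketch of the reuse pattern applied in the patch above.
     * struct shared_ctx stands in for mlx5_ibv_shared; the real driver
     * allocates a proper named hash list, not a dummy struct.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct hlist { int entries; };      /* placeholder for the hash list */

    struct shared_ctx {                 /* stand-in for mlx5_ibv_shared  */
    	struct hlist *flow_tbls;
    };

    static int
    alloc_table_hash_list(struct shared_ctx *sh)
    {
    	/* The real driver builds a flow table hash list here. */
    	sh->flow_tbls = calloc(1, sizeof(*sh->flow_tbls));
    	return sh->flow_tbls ? 0 : -1;
    }

    /* Called once per eth device; several devices may share one ctx. */
    static int
    alloc_shared_dr(struct shared_ctx *sh)
    {
    	int err = 0;

    	if (!sh->flow_tbls)        /* first port: allocate the list  */
    		err = alloc_table_hash_list(sh);
    	else                       /* later ports: reuse, don't leak */
    		printf("flow_tbls %p already created, reuse\n",
    		       (void *)sh->flow_tbls);
    	return err;
    }

    int
    main(void)
    {
    	struct shared_ctx sh = { .flow_tbls = NULL };

    	/* Two eth devices sharing one ibv device/context. */
    	if (alloc_shared_dr(&sh) || alloc_shared_dr(&sh))
    		return 1;
    	free(sh.flow_tbls);
    	return 0;
    }
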


Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

