[dpdk-stable] [PATCH] vdpa/mlx5: fix virtq cleaning

Maxime Coquelin maxime.coquelin at redhat.com
Wed Mar 24 11:38:56 CET 2021



On 3/1/21 11:41 AM, Matan Azrad wrote:
> The HW virtq object can be destroyed ether when the device is closed or

s/ether/either/

> when the state of the virtq becomes disabled.
> 
> Some parameters of the virtq should continue to be managed when the
> virtq state is changed but all of them must be initialized when the
> device is closed.
> 
> Wrongly, the enable parameter stayed on when the device is closed, which
> might cause creation of an invalid virtq the next time a device is
> assigned to the driver.
> 
> Clean all the virtqs memory when the device is closed.
> 
> Fixes: c47d6e83334e ("vdpa/mlx5: support queue update")
> Cc: stable at dpdk.org
> 
> Signed-off-by: Matan Azrad <matan at nvidia.com>
> Acked-by: Xueming Li <xuemingl at nvidia.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> index ef2642a..024c5c4 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> @@ -103,13 +103,8 @@
>  	for (i = 0; i < priv->nr_virtqs; i++) {
>  		virtq = &priv->virtqs[i];
>  		mlx5_vdpa_virtq_unset(virtq);
> -		if (virtq->counters) {
> +		if (virtq->counters)
>  			claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
> -			virtq->counters = NULL;
> -			memset(&virtq->reset, 0, sizeof(virtq->reset));
> -		}
> -		memset(virtq->err_time, 0, sizeof(virtq->err_time));
> -		virtq->n_retry = 0;
>  	}
>  	for (i = 0; i < priv->num_lag_ports; i++) {
>  		if (priv->tiss[i]) {
> @@ -126,6 +121,7 @@
>  		priv->virtq_db_addr = NULL;
>  	}
>  	priv->features = 0;
> +	memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs);
>  	priv->nr_virtqs = 0;
>  }
>  
> 
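
For context, the net effect of the two hunks is that the close path no
longer clears individual virtq fields; instead the whole virtq array is
wiped in one memset, so nothing (including the enable flag mentioned in
the commit message) can leak into the next device assignment. A simplified
sketch of the resulting shape, reusing only the names visible in the diff
(the enclosing function name is not shown in the hunk header, so the one
below is illustrative; the TIS and doorbell teardown are elided):

#include <string.h>

#include "mlx5_vdpa.h"	/* driver-internal types and helpers */

/* Illustrative sketch only, not the exact driver source. */
static void
mlx5_vdpa_virtqs_cleanup_sketch(struct mlx5_vdpa_priv *priv)
{
	struct mlx5_vdpa_virtq *virtq;
	int i;

	for (i = 0; i < priv->nr_virtqs; i++) {
		virtq = &priv->virtqs[i];
		/* Release the HW virtq object. */
		mlx5_vdpa_virtq_unset(virtq);
		/* Destroy the counter object; no per-field resets here anymore. */
		if (virtq->counters)
			claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
	}
	/*
	 * One memset replaces the removed per-field clears: counters, reset
	 * statistics, err_time, n_retry and the enable flag are all zeroed
	 * before the next device assignment.
	 */
	memset(priv->virtqs, 0, sizeof(*virtq) * priv->nr_virtqs);
	priv->nr_virtqs = 0;
}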

With the typo fixed in the commit message:

Reviewed-by: Maxime Coquelin <maxime.coquelin at redhat.com>

No need to resubmit; we can fix the typo while applying.

Thanks,
Maxime


