[dpdk-dev] [PATCH] net/mlx5: fix memory region cache init
Yongseok Koh
yskoh at mellanox.com
Fri May 25 12:19:43 CEST 2018
> On May 24, 2018, at 11:35 PM, Xueming Li <xuemingl at mellanox.com> wrote:
>
> This patch moves MR cache initialization from the device configuration
> function to the probe function to make sure it is initialized only once.
>
> Fixes: 974f1e7ef146 ("net/mlx5: add new memory region support")
> Cc: yskoh at mellanox.com
>
> Signed-off-by: Xueming Li <xuemingl at mellanox.com>
> ---
> drivers/net/mlx5/mlx5.c | 11 +++++++++++
> drivers/net/mlx5/mlx5_ethdev.c | 11 -----------
> drivers/net/mlx5/mlx5_mr.c | 1 +
> 3 files changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index dae847493..77ed8e01f 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -1193,6 +1193,17 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> goto port_error;
> }
> priv->config.max_verbs_prio = verb_priorities;
> + /*
> + * Once the device is added to the list of memory event
> + * callback, its global MR cache table cannot be expanded
> + * on the fly because of deadlock. If it overflows, lookup
> + * should be done by searching MR list linearly, which is slow.
> + */
> + err = -mlx5_mr_btree_init(&priv->mr.cache,
> + MLX5_MR_BTREE_CACHE_N * 2,
> + eth_dev->device->numa_node);
> + if (err)
> + goto port_error;
A nit:
Like mlx5_flow_create_drop_queue(), please store rte_errno in err (err = rte_errno;)
instead of negating the function's return value.
With that fixed, you can add my Acked-by tag when you submit v2.
Thanks,
Yongseok
> /* Add device to memory callback list. */
> rte_rwlock_write_lock(&mlx5_shared_data->mem_event_rwlock);
> LIST_INSERT_HEAD(&mlx5_shared_data->mem_event_cb_list,
> diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
> index f6cebae41..90488af33 100644
> --- a/drivers/net/mlx5/mlx5_ethdev.c
> +++ b/drivers/net/mlx5/mlx5_ethdev.c
> @@ -392,17 +392,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
> if (++j == rxqs_n)
> j = 0;
> }
> - /*
> - * Once the device is added to the list of memory event callback, its
> - * global MR cache table cannot be expanded on the fly because of
> - * deadlock. If it overflows, lookup should be done by searching MR list
> - * linearly, which is slow.
> - */
> - if (mlx5_mr_btree_init(&priv->mr.cache, MLX5_MR_BTREE_CACHE_N * 2,
> - dev->device->numa_node)) {
> - /* rte_errno is already set. */
> - return -rte_errno;
> - }
> return 0;
> }
>
> diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
> index abb1f5179..08105a443 100644
> --- a/drivers/net/mlx5/mlx5_mr.c
> +++ b/drivers/net/mlx5/mlx5_mr.c
> @@ -191,6 +191,7 @@ mlx5_mr_btree_init(struct mlx5_mr_btree *bt, int n, int socket)
> rte_errno = EINVAL;
> return -rte_errno;
> }
> + assert(!bt->table && !bt->size);
> memset(bt, 0, sizeof(*bt));
> bt->table = rte_calloc_socket("B-tree table",
> n, sizeof(struct mlx5_mr_cache),
> --
> 2.13.3
>