[PATCH 2/4] common/mlx5: release unused mempool entries
Bing Zhao
bingz at nvidia.com
Wed Nov 12 08:41:31 CET 2025
From: Roi Dayan <roid at nvidia.com>
When creating a new mempool registration that ends up sharing MR
entries already registered for a different mempool, the entries
allocated for the new registration become unused and must be
released. Fix it by freeing them after the shared cache lock is
dropped, to avoid a deadlock on share_cache->rwlock.
Fixes: 8947eebc999e ("common/mlx5: fix shared memory region ranges allocation")
Cc: bingz at nvidia.com
Signed-off-by: Roi Dayan <roid at nvidia.com>
Signed-off-by: Gregory Etelson <getelson at nvidia.com>
---
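Note (below the cut line, not part of the commit message): the change
follows the usual defer-the-free pattern, i.e. remember the stale
allocation while the lock is held and free it only after unlocking,
since freeing under share_cache->rwlock could deadlock. A minimal
standalone sketch of that pattern, using hypothetical names (reg, mrs,
share_registration) and a plain pthread rwlock instead of the driver's
structures:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct mrs { int lkey; };
struct reg { struct mrs *mrs; };

static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Attach new_reg to an existing shared registration; free the
 * now-unused allocation only after the lock is released. */
static void share_registration(struct reg *new_reg, struct reg *existing)
{
	struct mrs *stale = NULL;

	pthread_rwlock_wrlock(&cache_lock);
	if (existing != NULL) {
		stale = new_reg->mrs;         /* remember the unused allocation */
		new_reg->mrs = existing->mrs; /* reuse the shared entries */
	}
	pthread_rwlock_unlock(&cache_lock);
	/* Freeing here cannot deadlock: the lock is no longer held. */
	free(stale);
}

int main(void)
{
	struct reg existing = { .mrs = malloc(sizeof(struct mrs)) };
	struct reg fresh = { .mrs = malloc(sizeof(struct mrs)) };

	share_registration(&fresh, &existing);
	printf("fresh shares mrs with existing: %d\n",
	       fresh.mrs == existing.mrs);
	free(existing.mrs);
	return 0;
}
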
drivers/common/mlx5/mlx5_common_mr.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index c41ffff2d5..8ed988dec9 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -1717,18 +1717,24 @@ mlx5_mr_mempool_register_primary(struct mlx5_mr_share_cache *share_cache,
 	 * hugepage can be shared across mempools that also fit in it.
 	 */
 	if (share_hugepage) {
+		struct mlx5_mempool_mr *gc_mrs = NULL;
+
 		rte_rwlock_write_lock(&share_cache->rwlock);
 		LIST_FOREACH(mpr, &share_cache->mempool_reg_list, next) {
 			if (mpr->mrs[0].pmd_mr.addr == (void *)ranges[0].start)
 				break;
 		}
 		if (mpr != NULL) {
+			/* Releasing MRs here can create a deadlock on share_cache->rwlock */
+			gc_mrs = new_mpr->mrs;
 			new_mpr->mrs = mpr->mrs;
 			mlx5_mempool_reg_attach(new_mpr);
 			LIST_INSERT_HEAD(&share_cache->mempool_reg_list,
					 new_mpr, next);
 		}
 		rte_rwlock_write_unlock(&share_cache->rwlock);
+		if (gc_mrs != NULL)
+			mlx5_free(gc_mrs);
 		if (mpr != NULL) {
 			DRV_LOG(DEBUG, "Shared MR %#x in PD %p for mempool %s with mempool %s",
 				mpr->mrs[0].pmd_mr.lkey, pd, mp->name,
--
2.34.1