[dpdk-dev] [PATCH v2 4/4] net/ixgbe: delete HW rings when releasing queues
Wang, Haiyue
haiyue.wang at intel.com
Wed Sep 22 08:22:45 CEST 2021
> -----Original Message-----
> From: Yunjian Wang <wangyunjian at huawei.com>
> Sent: Saturday, September 18, 2021 16:42
> To: dev at dpdk.org
> Cc: Wang, Haiyue <haiyue.wang at intel.com>; Xing, Beilei <beilei.xing at intel.com>; Yang, Qiming
> <qiming.yang at intel.com>; Zhang, Qi Z <qi.z.zhang at intel.com>; dingxiaoxiong at huawei.com; Yunjian Wang
> <wangyunjian at huawei.com>
> Subject: [dpdk-dev] [PATCH v2 4/4] net/ixgbe: delete HW rings when releasing queues
>
> Normally the queue memzone should be freed when closing the
> device. But the memzone is not freed when the device setup
> ops run like:
>
> rte_eth_bond_slave_remove
> -->__eth_bond_slave_remove_lock_free
> ---->slave_remove
> ------>rte_eth_dev_internal_reset
> -------->rte_eth_dev_rx_queue_config
> ---------->eth_dev_rx_queue_config
> ------------>ixgbe_dev_rx_queue_release
> rte_eth_dev_close
> -->ixgbe_dev_close
> ---->ixgbe_dev_free_queues
> ------>ixgbe_dev_rx_queue_release
> (not called because nb_rx_queues and nb_tx_queues are 0)
>
> To free the memzone, we can release it when releasing the
> queues.
>
After re-checking the eth dev API, I think we can simplify the commit
message to something like:

Fix memzone leak when re-configuring the Rx/Tx queues.

Please see 'rte_eth_dev_configure': when the queue number is reduced,
the memzones of the queues beyond the new count are lost. This makes
it a MUST fix. ;-)

Also add the Fixes tag and CC stable.

What do you think?
> Signed-off-by: Yunjian Wang <wangyunjian at huawei.com>
> ---
> drivers/net/ixgbe/ixgbe_rxtx.c | 6 ++++--
> drivers/net/ixgbe/ixgbe_rxtx.h | 2 ++
> 2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index bfdfd5e755..1b6e0489f4 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -2482,6 +2482,7 @@ ixgbe_tx_queue_release(struct ixgbe_tx_queue *txq)
> if (txq != NULL && txq->ops != NULL) {
> txq->ops->release_mbufs(txq);
> txq->ops->free_swring(txq);
> + rte_memzone_free(txq->mz);
> rte_free(txq);
> }
> }
> @@ -2763,6 +2764,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
> return -ENOMEM;
> }
>
> + txq->mz = tz;
> txq->nb_tx_desc = nb_desc;
> txq->tx_rs_thresh = tx_rs_thresh;
> txq->tx_free_thresh = tx_free_thresh;
> @@ -2887,6 +2889,7 @@ ixgbe_rx_queue_release(struct ixgbe_rx_queue *rxq)
> ixgbe_rx_queue_release_mbufs(rxq);
> rte_free(rxq->sw_ring);
> rte_free(rxq->sw_sc_ring);
> + rte_memzone_free(rxq->mz);
> rte_free(rxq);
> }
> }
> @@ -3162,6 +3165,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> return -ENOMEM;
> }
>
> + rxq->mz = rz;
> /*
> * Zero init all the descriptors in the ring.
> */
> @@ -3433,14 +3437,12 @@ ixgbe_dev_free_queues(struct rte_eth_dev *dev)
> for (i = 0; i < dev->data->nb_rx_queues; i++) {
> ixgbe_dev_rx_queue_release(dev->data->rx_queues[i]);
> dev->data->rx_queues[i] = NULL;
> - rte_eth_dma_zone_free(dev, "rx_ring", i);
> }
> dev->data->nb_rx_queues = 0;
>
> for (i = 0; i < dev->data->nb_tx_queues; i++) {
> ixgbe_dev_tx_queue_release(dev->data->tx_queues[i]);
> dev->data->tx_queues[i] = NULL;
> - rte_eth_dma_zone_free(dev, "tx_ring", i);
> }
> dev->data->nb_tx_queues = 0;
> }
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 476ef62cfd..a1764f2b08 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -138,6 +138,7 @@ struct ixgbe_rx_queue {
> struct rte_mbuf fake_mbuf;
> /** hold packets to return to application */
> struct rte_mbuf *rx_stage[RTE_PMD_IXGBE_RX_MAX_BURST*2];
> + const struct rte_memzone *mz;
> };
>
> /**
> @@ -236,6 +237,7 @@ struct ixgbe_tx_queue {
> uint8_t using_ipsec;
> /**< indicates that IPsec TX feature is in use */
> #endif
> + const struct rte_memzone *mz;
> };
>
> struct ixgbe_txq_ops {
> --
> 2.23.0