[dpdk-dev] [PATCH v3] net/mlx5: return EAGAIN on premature disable interrupt calls

Ophir Munk ophirmu at mellanox.com
Tue Jul 21 16:41:07 CEST 2020


RXQ interrupts under Linux are based on the epoll mechanism. The
expected order of operations is as follows:
1. Call rte_eth_dev_rx_intr_enable() to arm the CQ for receiving events
on data input.
2. Block on rte_epoll_wait() with an array of file descriptors
representing the CQ events. Upon data arrival the kernel will signal an
input event on the corresponding CQ fd.
3. Call rte_eth_dev_rx_intr_disable() after the event was received and
continue in polling mode. The mlx5 implementation of
rte_eth_dev_rx_intr_disable() gets the CQ event and acks it. A sketch
of this sequence is shown below.
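
Below is a minimal sketch (not part of the patch) of this sequence from
the application side. It assumes the Rx queue's interrupt fd has already
been added to the per-thread epoll instance, e.g. with
rte_eth_dev_rx_intr_ctl_q(); port_id, queue_id and the 100 ms timeout
are illustrative values.

#include <rte_ethdev.h>
#include <rte_interrupts.h>

static int
rx_intr_wait(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;
	int ret, n;

	/* 1. Arm the CQ so the next packet arrival raises an event. */
	ret = rte_eth_dev_rx_intr_enable(port_id, queue_id);
	if (ret != 0)
		return ret;
	/* 2. Block until the CQ fd signals input or the timeout expires. */
	n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 100);
	/* 3. Ack the event (if any) and return to polling mode. */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
	return n; /* number of events, 0 on timeout, negative on error */
}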

In practice an application may wake up from rte_epoll_wait() due to a
timeout, with no event to ack, and still call
rte_eth_dev_rx_intr_disable() unconditionally. In such cases the call
should return EAGAIN (since the file descriptors are non-blocking), as
opposed to EINVAL, which indicates a real failure. In the EAGAIN case
the PMD should not warn "Unable to disable interrupt on Rx queue".
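
With this fix, a caller can distinguish the benign timeout case from a
real failure. A hedged sketch (the function name and logging are
illustrative, not part of the patch):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_log.h>

/* Called after rte_epoll_wait() returns, whether or not an event arrived. */
static void
rx_back_to_polling(uint16_t port_id, uint16_t queue_id)
{
	int ret = rte_eth_dev_rx_intr_disable(port_id, queue_id);

	if (ret == -EAGAIN)
		return; /* Timed-out wake-up: no CQ event to ack. */
	if (ret < 0)
		RTE_LOG(ERR, USER1,
			"Failed to disable Rx interrupt on port %u queue %u: %d\n",
			port_id, queue_id, ret);
}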

This commit fixes an earlier commit in which a return value of 0 from
devx_get_event() was considered an error.

Fixes: 19e429e5c7c2 ("net/mlx5: implement CQ for RxQ using DevX API")

Signed-off-by: Ophir Munk <ophirmu at mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo at mellanox.com>
Acked-by: Raslan Darawsheh <rasland at mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e6dc5ac..c78e522 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1188,10 +1188,8 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_IBV) {
 		ret = mlx5_glue->get_cq_event(rxq_obj->ibv_channel, &ev_cq,
 					      &ev_ctx);
-		if (ret || ev_cq != rxq_obj->ibv_cq) {
-			rte_errno = EINVAL;
+		if (ret < 0 || ev_cq != rxq_obj->ibv_cq)
 			goto exit;
-		}
 		mlx5_glue->ack_cq_events(rxq_obj->ibv_cq, 1);
 	} else if (rxq_obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ) {
 #ifdef HAVE_IBV_DEVX_EVENT
@@ -1200,22 +1198,29 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		ret = mlx5_glue->devx_get_event
 				(rxq_obj->devx_channel, event_data,
 				 sizeof(struct mlx5dv_devx_async_event_hdr));
-		if (ret <= 0 || event_data->cookie !=
-				(uint64_t)(uintptr_t)rxq_obj->devx_cq) {
-			rte_errno = EINVAL;
+		if (ret < 0 || event_data->cookie !=
+				(uint64_t)(uintptr_t)rxq_obj->devx_cq)
 			goto exit;
-		}
 #endif /* HAVE_IBV_DEVX_EVENT */
 	}
 	rxq_data->cq_arm_sn++;
 	mlx5_rxq_obj_release(rxq_obj);
 	return 0;
 exit:
+	/**
+	 * For ret < 0 save the errno (may be EAGAIN which means the get_event
+	 * function was called before receiving one).
+	 */
+	if (ret < 0)
+		rte_errno = errno;
+	else
+		rte_errno = EINVAL;
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	if (rxq_obj)
 		mlx5_rxq_obj_release(rxq_obj);
-	DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
-		dev->data->port_id, rx_queue_id);
+	if (ret != EAGAIN)
+		DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
+			dev->data->port_id, rx_queue_id);
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }
-- 
2.8.4