[PATCH] net/mlx5: fix partial inline of fine grain packets

Dariusz Sosnowski dsosnowski at nvidia.com
Wed Nov 17 10:50:50 CET 2021


When a user tried to send multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set, using a device with
minimum inlining requirements (such as ConnectX-4 Lx, or when the user
specified them explicitly), sending such packets caused a segfault.
The segfault was caused by failed invariants in the
mlx5_tx_packet_multi_inline function: the mandatory inline length
taken from txq->inlen_mode could exceed the inline length actually
requested for the packet.

This patch introduces logic for multi-segment packets with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set that skips the mbuf scan
used to fill the inline buffer and inlines only the minimal amount of
data required.
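
For context, the fine granularity inline feature is driven by a
dynamic mbuf flag that the application registers before device start
(the mlx5 PMD looks the flag up when the port starts). Below is a
minimal application-side sketch, assuming the flag name exported by
rte_pmd_mlx5.h; the register_noinline_flag/mark_no_inline helpers are
illustrative, not part of any API:

#include <stdint.h>
#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_pmd_mlx5.h>

static uint64_t noinline_mask;

/* Register the per-packet no-inline dynamic flag; this must happen
 * before rte_eth_dev_start() so the mlx5 PMD picks the flag up. */
static int
register_noinline_flag(void)
{
	const struct rte_mbuf_dynflag desc = {
		.name = RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
	};
	int bit = rte_mbuf_dynflag_register(&desc);

	if (bit < 0)
		return -rte_errno;
	noinline_mask = UINT64_C(1) << bit;
	return 0;
}

/* Ask the PMD not to inline the data of this particular packet. */
static void
mark_no_inline(struct rte_mbuf *m)
{
	m->ol_flags |= noinline_mask;
}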

Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
Cc: viacheslavo at nvidia.com
Cc: stable at dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski at nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
---
 drivers/net/mlx5/mlx5_tx.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index ad13b5e608..bc629983fa 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -1933,7 +1933,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(txq->inlen_mode >=
 				    MLX5_ESEG_MIN_INLINE_SIZE);
 			MLX5_ASSERT(txq->inlen_mode <= txq->inlen_send);
-			inlen = txq->inlen_mode;
+			inlen = RTE_MIN(txq->inlen_mode, inlen);
 		} else if (vlan && !txq->vlan_en) {
 			/*
 			 * VLAN insertion is requested and hardware does not
@@ -1946,6 +1946,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		} else {
 			goto do_first;
 		}
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
+			goto do_build;
 		/*
 		 * Now we know the minimal amount of data is requested
 		 * to inline. Check whether we should inline the buffers
@@ -1978,6 +1980,8 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 				mbuf = NEXT(mbuf);
 				/* There should be not end of packet. */
 				MLX5_ASSERT(mbuf);
+				if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
+					break;
 				nxlen = inlen + rte_pktmbuf_data_len(mbuf);
 			} while (unlikely(nxlen < txq->inlen_send));
 		}
@@ -2005,6 +2009,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * Estimate the number of Data Segments conservatively,
 	 * supposing no any mbufs is being freed during inlining.
 	 */
+do_build:
 	MLX5_ASSERT(inlen <= txq->inlen_send);
 	ds = NB_SEGS(loc->mbuf) + 2 + (inlen -
 				       MLX5_ESEG_MIN_INLINE_SIZE +
-- 
2.25.1
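
To make the control flow of the hunks above easier to follow, here is
a hypothetical, self-contained sketch of the patched decision. All
names are simplified (ESEG_MIN_INLINE stands in for
MLX5_ESEG_MIN_INLINE_SIZE, and 58 is an arbitrary inlen_mode); this
models the logic, it is not the driver code:

#include <stdbool.h>
#include <stdio.h>

#define ESEG_MIN_INLINE 18u
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Model of the patched path in mlx5_tx_packet_multi_inline():
 * clamp the mandatory inline length to what the packet actually
 * requests, and skip the mbuf chain scan when no-inline is set. */
static unsigned int
pick_inline_len(unsigned int inlen_mode, unsigned int requested,
		bool no_inline)
{
	unsigned int inlen = requested;

	if (inlen_mode) {
		/* Before the fix this was "inlen = inlen_mode;",
		 * which could grow inlen past the requested inline
		 * part and break the later invariants. */
		inlen = MIN(inlen_mode, inlen);
	}
	if (no_inline) {
		/* Fine granularity inline: "goto do_build" -- build
		 * the WQE with the minimal inline part, no mbuf scan. */
		return inlen;
	}
	/* ... otherwise the driver scans the mbuf chain and may grow
	 * inlen up to txq->inlen_send ... */
	return inlen;
}

int
main(void)
{
	/* Device mandates inlen_mode = 58 while only the 18-byte
	 * minimum is requested: the clamp keeps 18 and avoids the
	 * failed invariant. */
	printf("inlen = %u\n",
	       pick_inline_len(58, ESEG_MIN_INLINE, true));
	return 0;
}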