[PATCH v2] net/mlx5: mitigate the Tx queue parameter adjustment
Raslan Darawsheh
rasland at nvidia.com
Mon May 12 08:24:06 CEST 2025
Hi,
On 24/04/2025 4:31 PM, Viacheslav Ovsiienko wrote:
> The DPDK API rte_eth_tx_queue_setup() has a parameter nb_tx_desc
> specifying the desired queue capacity, measured in packets.
>
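For reference, a minimal application-side sketch of the call this refers
to; the port/queue ids, the descriptor count, and the bail-out error
handling are illustrative, not taken from the patch:

#include <stdlib.h>
#include <rte_ethdev.h>
#include <rte_debug.h>

/* Request a Tx queue with nb_tx_desc descriptors (capacity in packets). */
static void
setup_tx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_tx_desc)
{
	int ret = rte_eth_tx_queue_setup(port_id, queue_id, nb_tx_desc,
					 rte_eth_dev_socket_id(port_id),
					 NULL /* default Tx queue config */);

	if (ret != 0)
		rte_exit(EXIT_FAILURE, "Tx queue %u setup failed: %d\n",
			 queue_id, ret);
}
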
> The ConnectX NIC series has a hardware-imposed queue size
> limit of 32K WQEs (packet hardware descriptors). Typically,
> sending a single packet requires one WQE.
>
> There is a special offload option, data-inlining, to improve
> performance for small packets. Also, NICs in some configurations
> require a minimum amount of inline data for the steering engine
> to operate correctly.
>
> In the case of inline data, more than one WQE may be required
> to send a single packet. The mlx5 PMD takes this into account
> and adjusts the number of queue WQEs accordingly.
>
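To make the adjustment above concrete, here is a rough, hypothetical
sketch of the WQE accounting; the constants, helper names, and the
per-packet estimate are illustrative assumptions, not the actual mlx5
PMD code:

#include <stdint.h>

#define MLX5_MAX_WQE	32768u	/* hardware queue size limit, 32K WQEs */
#define WQE_SEG_SIZE	64u	/* illustrative WQE building-block size */

/*
 * Hypothetical estimate of how many WQEs one packet needs when
 * 'inline_size' bytes of its data are copied into the descriptors.
 * The real figure depends on NIC configuration and offload flags.
 */
static inline uint32_t
wqes_per_packet(uint32_t inline_size)
{
	uint32_t bytes = inline_size + 2 * WQE_SEG_SIZE; /* data + ctrl/eth segments */

	return (bytes + WQE_SEG_SIZE - 1) / WQE_SEG_SIZE;
}

/*
 * Total WQEs needed for a queue of 'nb_tx_desc' packets; with large
 * inline sizes this can exceed MLX5_MAX_WQE, which is the situation
 * the patch addresses.
 */
static inline uint32_t
txq_wqe_count(uint16_t nb_tx_desc, uint32_t inline_size)
{
	return (uint32_t)nb_tx_desc * wqes_per_packet(inline_size);
}
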
> If the requested queue capacity could not be satisfied due to
> the hardware queue size limit, the mlx5 PMD rejected the queue
> creation, causing an unresolvable application failure.
>
> The patch provides the following:
>
> - Fixes the calculation of the number of WQEs required to send
> a single packet with inline data, making it more precise and
> extending the painless operating range.
>
> - If the requested queue capacity can't be satisfied due to the WQE
> count adjustment for inline data, it no longer causes a severe
> error. Instead, a warning message is emitted and the queue is
> created with the maximum available size, and success is reported
> (see the query sketch after this list).
>
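Because the queue may now be created smaller than requested, an
application that cares about the effective capacity can query it after
setup, where the PMD supports queue info retrieval; a minimal sketch
(not part of the patch, ids as in the setup sketch above):

#include <stdio.h>
#include <rte_ethdev.h>

static void
check_tx_queue_capacity(uint16_t port_id, uint16_t queue_id,
			uint16_t nb_tx_desc)
{
	struct rte_eth_txq_info qinfo;

	if (rte_eth_tx_queue_info_get(port_id, queue_id, &qinfo) == 0 &&
	    qinfo.nb_desc < nb_tx_desc)
		printf("Tx queue %u: requested %u descriptors, created with %u\n",
		       queue_id, nb_tx_desc, qinfo.nb_desc);
}
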
> Please note that the inline data size depends on many options
> (NIC configuration, queue offload flags, packet offload flags,
> packet size, etc.), so the actual queue capacity might not be
> impacted at all.
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski at nvidia.com>

Patch applied to next-net-mlx,
--
Kindest regards
Raslan Darawsheh