[dpdk-dev] [RFC] net/mlx5: add support of LRO in MLX5 PMD

Dekel Peled dekelp at mellanox.com
Thu May 16 14:13:12 CEST 2019

LRO (Large Receive Offload) is intended to reduce host CPU overhead when processing Rx TCP packets.
LRO works by aggregating multiple incoming packets from a single stream into a larger buffer before they
are passed higher up the networking stack, thus reducing the number of packets that have to be processed.

MLX5 PMD will query the HCA capabilities on initialization to check if LRO is supported and can be used.
LRO in MLX5 PMD is intended for use by applications using a relatively small number of flows.
LRO support can be enabled per port.
Multiple simultaneous LRO sessions will be supported, using multiple RQs per DPDK Rx queue.
In each LRO session, packets of the same flow will be coalesced until one of the following occurs:

  *   Buffer size limit is exceeded.
  *   Session timeout is exceeded.
  *   Packet from a different flow is received on the same queue.
When an LRO session ends, the coalesced packet is passed to the PMD, which updates the header fields
before passing the packet to the application.
For efficient memory utilization, packets from all RQs will be stored in a single RMP (Receive Memory Pool)
per DPDK Rx queue, utilizing the MPRQ (Multi-Packet RQ) mechanism.
Support of non-LRO flows will not be impacted.
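The per-session coalescing rules above can be sketched as a flush decision. This is an illustrative sketch only, not the actual PMD code; the struct, function, and size limit below are hypothetical stand-ins.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical buffer size limit; the real limit comes from HCA capabilities. */
#define LRO_MAX_SESSION_BYTES 65280u

/* Hypothetical per-session state for one LRO aggregation. */
struct lro_session {
	uint32_t flow_hash;	/* identifies the TCP flow being coalesced */
	uint32_t bytes;		/* bytes accumulated so far */
	uint64_t start_us;	/* session start time, microseconds */
};

/*
 * Return true if the session must be flushed, per the three conditions
 * above: buffer size limit exceeded, session timeout exceeded, or a
 * packet from a different flow received on the same queue.
 */
static bool
lro_session_must_flush(const struct lro_session *s,
		       uint32_t pkt_flow_hash, uint32_t pkt_len,
		       uint64_t now_us, uint64_t timeout_us)
{
	if (s->bytes + pkt_len > LRO_MAX_SESSION_BYTES)
		return true;	/* buffer size limit exceeded */
	if (now_us - s->start_us > timeout_us)
		return true;	/* session timeout exceeded */
	if (pkt_flow_hash != s->flow_hash)
		return true;	/* packet from a different flow */
	return false;
}
```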

Existing API:
Offload capability DEV_RX_OFFLOAD_TCP_LRO will be used to indicate device supports LRO.
testpmd command-line option "--enable-lro" will be used to request that LRO be enabled at application start.
testpmd rx_offload "tcp_lro", set to on or off, will be used to enable or disable LRO during application runtime.
Offload flag PKT_RX_LRO will be used. This flag can be set in an Rx mbuf to indicate that it holds an LRO coalesced packet.
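An application would branch on PKT_RX_LRO in the mbuf's offload flags to handle coalesced (possibly larger-than-MTU) packets. The struct and the flag's bit position below are simplified stand-ins, not the real rte_mbuf.h definitions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified stand-in for the rte_mbuf offload flag; real applications
 * use the PKT_RX_LRO definition from rte_mbuf.h. Bit position is
 * illustrative only. */
#define PKT_RX_LRO (1ULL << 16)

/* Minimal stand-in for the Rx mbuf fields used here. */
struct mbuf_stub {
	uint64_t ol_flags;	/* offload flags set by the PMD on Rx */
	uint32_t pkt_len;	/* may exceed the MTU for coalesced packets */
};

/* True if the PMD marked this mbuf as an LRO coalesced packet. */
static bool
is_lro_coalesced(const struct mbuf_stub *m)
{
	return (m->ol_flags & PKT_RX_LRO) != 0;
}
```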

New API:
PMD configuration parameter lro_timeout_usec will be added.
This parameter can be used by the application to select the LRO session timeout (in microseconds).
If this value is not specified, the minimal value supported by the device will be used.
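The fallback rule can be sketched as below; the function name is illustrative, and zero is assumed here to mean "not specified":

```c
#include <stdint.h>

/*
 * Select the LRO session timeout: use the application-supplied
 * lro_timeout_usec when given (non-zero here, as an assumption),
 * otherwise fall back to the minimal timeout the device supports.
 */
static uint32_t
lro_select_timeout(uint32_t requested_usec, uint32_t dev_min_usec)
{
	return requested_usec != 0 ? requested_usec : dev_min_usec;
}
```

With testpmd, such a PMD parameter would typically be passed as a device argument, e.g. "-w 0000:03:00.0,lro_timeout_usec=32" (exact syntax depends on the final implementation).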

Comments are welcome.

Signed-off-by: Dekel Peled <dekelp at mellanox.com>
