[dpdk-users] RX of multi-segment jumbo frames
keith.wiles at intel.com
Sat Feb 9 16:36:04 CET 2019
> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski <contact at filipjaniszewski.com> wrote:
> On 09/02/19 14:51, Wiles, Keith wrote:
>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact at filipjaniszewski.com> wrote:
>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
>>> frames only if the mbuf is large enough to contain the whole packet. Is
>>> there a way to make DPDK chain the incoming data across mbufs smaller
>>> than the actual packet?
>>> We don't have many of those big packets coming in, so it would be optimal
>>> to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and then configure
>>> the RX device to chain those bufs for larger packets, but I can't find a
>>> way to do it. Any suggestions?
>> As best I understand it, the NIC or PMD needs to be configured to split packets across the mbufs in the RX ring. I would look in the docs for the NIC to see if it supports splitting up packets, or ask the maintainer listed in the MAINTAINERS file.
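For what it's worth, a rough sketch of what that capability check and
configuration could look like (enable_scattered_rx is my own helper name,
and port_id is assumed to be an already-probed port; the offload flags are
the standard ethdev ones):

#include <errno.h>
#include <rte_ethdev.h>

/* Ask the PMD whether it can scatter one packet across several RX
 * mbufs, then request it together with the jumbo frame offload
 * before calling rte_eth_dev_configure(). */
static int
enable_scattered_rx(uint16_t port_id, struct rte_eth_conf *conf)
{
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);

        if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER))
                return -ENOTSUP; /* PMD cannot chain RX mbufs */

        conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME |
                                 DEV_RX_OFFLOAD_SCATTER;
        /* 9000-byte MTU plus Ethernet header and CRC */
        conf->rxmode.max_rx_pkt_len = 9018;
        return 0;
}

As far as I know, DEV_RX_OFFLOAD_SCATTER is the key flag here; without it
most PMDs will only deliver packets that fit in a single mbuf.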
> I can capture jumbo packets with Wireshark on the same card (same port,
> same setup), which leads me to think the problem is purely in my DPDK
> setup.
> According to ethtool, the jumbo packet (from now on JF, Jumbo Frame) is
> detected at the PHY level; the counters rx_packets_phy, rx_bytes_phy and
> rx_8192_to_10239_bytes_phy are properly increased.
> There was an option to set up JF support manually, but it was removed
> from DPDK after version 16.07: CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N.
> According to the release note:
> Improved jumbo frames support, by dynamically setting RX scatter gather
> elements according to the MTU and mbuf size, no need for compilation
> parameter ``MLX5_PMD_SGE_WR_N``
> Not quite sure where to look..
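If I read that release note correctly, the mlx5 PMD now derives the number
of RX scatter-gather entries itself from max_rx_pkt_len and the mempool's
mbuf data room (my reading, not verified against the mlx5 code): with
9018-byte frames and RTE_MBUF_DEFAULT_BUF_SIZE mbufs, which leave 2048
usable bytes after headroom, that would be ceil(9018 / 2048) = 5 segments
per packet, so the mempool's data room size, not a compile-time constant,
is what drives the segment count.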
The maintainer is your best bet now.
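That said, once scatter is enabled the burst API hands the application the
first mbuf of a chain, so the receive path has to walk the segments; a
small example of what that looks like (dump_segments is my own name):

#include <stdio.h>
#include <rte_mbuf.h>

/* A scattered packet arrives as a chain of mbufs: pkt_len is the total
 * frame size, data_len the bytes held by each segment, and ->next
 * links the segments together. */
static void
dump_segments(const struct rte_mbuf *m)
{
        const struct rte_mbuf *s;
        uint32_t off = 0;
        uint16_t seg = 0;

        printf("pkt_len=%u nb_segs=%u\n", m->pkt_len, m->nb_segs);

        for (s = m; s != NULL; s = s->next) {
                printf("  seg %u: %u bytes at offset %u\n",
                       seg++, s->data_len, off);
                off += s->data_len;
        }
}

Note that rte_pktmbuf_mtod() on each segment gives the payload of that
segment only, so any code that assumed the whole frame is contiguous needs
a second look (or an rte_pktmbuf_linearize() call in front of it).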
> BR, Filip
> +48 666 369 823