[dpdk-users] RX of multi-segment jumbo frames

Wiles, Keith keith.wiles at intel.com
Fri Feb 15 14:30:30 CET 2019



> On Feb 14, 2019, at 11:59 PM, Filip Janiszewski <contact at filipjaniszewski.com> wrote:
> 
> Unfortunately I didn't get much help from the maintainers at Mellanox,
> but I discovered that DPDK 18.05 has the flag ignore_offload_bitfield,
> which, once toggled to 1 along with the offloads set to
> DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER, allows DPDK to
> capture jumbo frames on Mellanox:
> 
> https://doc.dpdk.org/api-18.05/structrte__eth__rxmode.html
> 
> In DPDK 19.02 this flag is missing and I can't capture Jumbos with my
> current configuration.
> 
> Sadly, even if setting ignore_offload_bitfield to 1 fixes my problem, it
> creates a bunch more: the packets coming in are not timestamped, for
> example (setting hw_timestamp to 1 does not fix the issue, as the
> timestamps are still EPOCH + some ms).
> 
> Not sure if this triggers any ideas; it's not completely clear to me
> what the purpose of ignore_offload_bitfield was (it was removed later)
> or how to enable jumbo frames properly.
> 
> What I've attempted so far (apart from the ignore_offload_bitfield):
> 
> 1) Set the MTU to 9600 (rte_eth_dev_set_mtu)
> 2) Configure port with offloads DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_JUMBO_FRAME, max_rx_pkt_len set to 9600
> 3) Configure RX queue with default_rxconf (from rte_eth_dev_info) adding
> the offloads from the port configuration (DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> 
> The JFs are reported as ierrors in rte_eth_stats.

Sorry, the last time I had any dealings with Mellanox I was not able to get it to work, so I'm not going to be much help here.
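
For what it's worth, my reading of the configuration you describe against
the 18.05 API would be roughly the fragment below. This is only a sketch:
port 0, a single RX queue, the 1024 descriptors and the already-created
mbuf_pool are placeholders, and I have not tried it on mlx5.

    /* Sketch only -- assumes the usual DPDK headers (rte_ethdev.h etc.),
     * an initialized EAL and an existing mbuf_pool; values are placeholders. */
    struct rte_eth_conf port_conf = { 0 };
    struct rte_eth_rxconf rxq_conf;
    struct rte_eth_dev_info dev_info;

    port_conf.rxmode.max_rx_pkt_len = 9600;
    port_conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
                                DEV_RX_OFFLOAD_SCATTER;
    port_conf.rxmode.ignore_offload_bitfield = 1; /* use the offloads field */

    rte_eth_dev_configure(0 /* port */, 1 /* rxq */, 1 /* txq */, &port_conf);
    rte_eth_dev_set_mtu(0, 9600);

    rte_eth_dev_info_get(0, &dev_info);
    rxq_conf = dev_info.default_rxconf;
    rxq_conf.offloads = port_conf.rxmode.offloads;

    rte_eth_rx_queue_setup(0, 0, 1024, rte_socket_id(), &rxq_conf, mbuf_pool);

DEV_RX_OFFLOAD_SCATTER is what lets the PMD spread one frame across several
default-sized mbufs; DEV_RX_OFFLOAD_JUMBO_FRAME plus max_rx_pkt_len only
raises the accepted frame length.
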
> 
> Thanks
> 
> On 09/02/19 16:36, Wiles, Keith wrote:
>> 
>> 
>>> On Feb 9, 2019, at 9:27 AM, Filip Janiszewski <contact at filipjaniszewski.com> wrote:
>>> 
>>> 
>>> 
>>> On 09/02/19 14:51, Wiles, Keith wrote:
>>>> 
>>>> 
>>>>> On Feb 9, 2019, at 5:11 AM, Filip Janiszewski <contact at filipjaniszewski.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I'm attempting to receive jumbo frames (~9000 bytes) on a Mellanox card
>>>>> using DPDK. I've configured the DEV_RX_OFFLOAD_JUMBO_FRAME offload for
>>>>> rte_eth_conf and rte_eth_rxconf (per RX queue), but I can capture jumbo
>>>>> frames only if the mbuf is large enough to contain the whole packet. Is
>>>>> there a way to enable DPDK to chain the incoming data into mbufs smaller
>>>>> than the actual packet?
>>>>> 
>>>>> We don't have many of those big packets coming in, so it would be
>>>>> optimal to leave the mbuf size at RTE_MBUF_DEFAULT_BUF_SIZE and then
>>>>> configure the RX device to chain those bufs for larger packets, but I
>>>>> can't find a way to do it. Any suggestions?
>>>>> 
>>>> 
>>>> The best I understand it, the NIC or PMD needs to be configured to split up packets between mbufs in the RX ring. I would look in the docs for the NIC to see if it supports splitting up packets, or ask the maintainer from the MAINTAINERS file.
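
Just to illustrate the receive side: once scatter is enabled and a frame
does not fit into one mbuf, the PMD should hand back a chained mbuf. A
minimal, untested sketch of consuming such a chain after rte_eth_rx_burst
(port, queue and burst size are placeholders):

    struct rte_mbuf *pkts[32];
    uint16_t nb = rte_eth_rx_burst(0 /* port */, 0 /* queue */, pkts, 32);

    for (uint16_t i = 0; i < nb; i++) {
            struct rte_mbuf *m = pkts[i];
            /* m->pkt_len is the whole frame, m->nb_segs the chain length */
            for (struct rte_mbuf *seg = m; seg != NULL; seg = seg->next) {
                    const uint8_t *p = rte_pktmbuf_mtod(seg, const uint8_t *);
                    /* seg->data_len bytes of the frame live at p */
                    (void)p;
            }
            rte_pktmbuf_free(m); /* frees the whole chain */
    }
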
>>> 
>>> I can capture jumbo packets with Wireshark on the same card (same port,
>>> same setup), which makes me think the problem is purely in my DPDK
>>> configuration.
>>> 
>>> According to ethtool, the jumbo packets (from now on JF, Jumbo Frame) are
>>> detected at the phy level; the counters rx_packets_phy, rx_bytes_phy and
>>> rx_8192_to_10239_bytes_phy are properly increased.
>>> 
>>> There was an option to manually set up JF support, but it was removed
>>> from DPDK after version 16.07: CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N.
>>> According to the release notes:
>>> 
>>> .
>>> Improved jumbo frames support, by dynamically setting RX scatter gather
>>> elements according to the MTU and mbuf size, no need for compilation
>>> parameter ``MLX5_PMD_SGE_WR_N``
>>> .
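
If I read that release note right, the PMD now derives the number of RX
scatter-gather entries from the MTU and the mbuf data room, i.e. something
on the order of this back-of-the-envelope (not the actual mlx5 code, names
are placeholders):

    uint16_t room = rte_pktmbuf_data_room_size(mbuf_pool) -
                    RTE_PKTMBUF_HEADROOM;
    unsigned int sges = (max_rx_pkt_len + room - 1) / room; /* ceil() */

so with default-sized mbufs a ~9000 byte frame needs several entries, which
is why the scatter offload has to be enabled as well.
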
>>> 
>>> Not quite sure where to look...
>>> 
>> 
>> The maintainer is your best bet now.
>>>>> Thanks
>>>>> 
>>>>> -- 
>>>>> BR, Filip
>>>>> +48 666 369 823
>>>> 
>>>> Regards,
>>>> Keith
>>>> 
>>> 
>>> -- 
>>> BR, Filip
>>> +48 666 369 823
>> 
>> Regards,
>> Keith
>> 
> 
> -- 
> BR, Filip
> +48 666 369 823

Regards,
Keith


