[dpdk-users] segmentation fault while accessing mbuf

Alex Kiselev alex at therouter.net
Sun Jun 7 19:11:28 CEST 2020


On 2020-06-07 17:21, Cliff Burdick wrote:
> The mbuf pool should be configured to be the size of the largest
> packet you expect to receive. If you're getting packets longer than
> that, I would expect you to see problems. Same goes for transmitting;
> I believe it will just read past the end of the mbuf data.

I am calling rte_eth_dev_set_mtu() with an MTU value that is consistent
with the mbuf size, so I believe I don't have any overflow bugs in the
RX code.
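
For reference, here is a minimal sketch of what I mean by keeping the
two consistent (DPDK 18.11 API; the pool name and sizes are made-up
values, not my real configuration):

#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

/* Size the mbuf data room so a max-MTU frame always fits in one
 * segment, then program the same MTU into the port. */
static struct rte_mempool *
create_rx_pool(uint16_t port_id, uint16_t mtu, int socket_id)
{
        /* data room = headroom + L2 header + MTU + CRC */
        uint16_t data_room = RTE_PKTMBUF_HEADROOM + ETHER_HDR_LEN +
                             mtu + ETHER_CRC_LEN;
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                                     data_room, socket_id);
        if (mp == NULL)
                return NULL;

        if (rte_eth_dev_set_mtu(port_id, mtu) != 0)
                return NULL;    /* the PMD rejected the MTU */

        return mp;
}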

And I've found a couple of bugs in the TX code. Both of them have to do
with incorrect use of the mbuf pkt_len/data_len fields.

But the crash happened while receiving packets, which is why I am
wondering: could the bugs I found in the TX code cause the crash in RX?
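
For the record, a pattern that avoids this class of bug looks roughly
like the following (a sketch; fill_tx_mbuf() and its arguments are
illustrative names): rte_pktmbuf_append() advances data_len and pkt_len
together and fails instead of overflowing when the tailroom is too
small.

#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Let the mbuf API maintain pkt_len/data_len instead of assigning
 * them by hand. */
static int
fill_tx_mbuf(struct rte_mbuf *m, const void *payload, uint16_t len)
{
        char *dst = rte_pktmbuf_append(m, len);

        if (dst == NULL)
                return -1;      /* not enough tailroom: drop, don't corrupt */

        rte_memcpy(dst, payload, len);
        return 0;
}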


> 
> On Sun, Jun 7, 2020, 06:36 Alex Kiselev <alex at therouter.net> wrote:
> 
>> On 2020-06-07 15:16, Cliff Burdick wrote:
>>> That shouldn't matter. The mbuf size is allocated when you create
>>> the mempool, and data_len/pkt_len are just to specify the size of
>>> the total packet and each segment. The underlying storage size is
>>> still the same.
>> 
>> It does matter. I've done some tests, and after sending a few mbufs
>> with data_len/pkt_len bigger than the size of the mbuf's underlying
>> buffer, the app stops sending/receiving packets. The PMD apparently
>> reads beyond the end of the mbuf's buffer, which is why I still think
>> my question about the impact of using incorrect data_len/pkt_len
>> values is valid.
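>> 
>> For illustration, a guard of roughly this shape in front of
>> rte_eth_tx_burst() would catch such mbufs before the PMD sees them
>> (a sketch; the helper name is made up):
>> 
>> static int
>> mbuf_len_is_sane(const struct rte_mbuf *m)
>> {
>>         const struct rte_mbuf *seg;
>>         uint32_t total = 0;
>> 
>>         for (seg = m; seg != NULL; seg = seg->next) {
>>                 /* data must end inside this segment's buffer */
>>                 if ((uint32_t)seg->data_off + seg->data_len > seg->buf_len)
>>                         return 0;
>>                 total += seg->data_len;
>>         }
>>         /* pkt_len must equal the sum of all segment data_len */
>>         return total == m->pkt_len;
>> }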
>> 
>>> 
>>> Have you checked to see if it's potentially a hugepage issue?
>> 
>> Please, explain.
>> 
>> The app had been running for two months before the crash with a load
>> of 3-4 Gbit/s, so no, I don't think anything is wrong with hugepages
>> on that machine.
>> 
>>> 
>>> On Sun, Jun 7, 2020, 02:59 Alex Kiselev <alex at therouter.net>
>> wrote:
>>> 
>>>> On 2020-06-07 04:41, Cliff Burdick wrote:
>>>>> I can't tell from your code, but you assigned nb_rx to the number
>>>>> of packets received, but then used vec_size, which might be
>>>>> larger. Does this happen if you use nb_rx in your loops?
>>>> 
>>>> No, this doesn't happen.
>>>> I just omitted the part of the code that translates nb_rx to
>>>> vec_size, since that code has been double-checked.
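>>>> 
>>>> For reference, a simplified sketch of the receive loop bounded
>>>> directly by the rte_eth_rx_burst() return value, i.e. the variant
>>>> your suggestion amounts to:
>>>> 
>>>> uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts_burst,
>>>>                                   MAX_PKT_BURST);
>>>> uint16_t i;
>>>> 
>>>> for (i = 0; i < nb_rx; i++)
>>>>         rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[i], void *));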
>>>> 
>>>> My actual question now is about possible impact of using
>>>> incorrect values of mbuf's pkt_len and data_len fields.
>>>> 
>>>>> 
>>>>> On Sat, Jun 6, 2020 at 5:59 AM Alex Kiselev <alex at therouter.net>
>>>>> wrote:
>>>>> 
>>>>>>> On 1 June 2020, at 19:17, Stephen Hemminger
>>>>>>> <stephen at networkplumber.org> wrote:
>>>>>>> 
>>>>>>> On Mon, 01 Jun 2020 15:24:25 +0200
>>>>>>> Alex Kiselev <alex at therouter.net> wrote:
>>>>>>> 
>>>>>>>> Hello,
>>>>>>>> 
>>>>>>>> I've got a segmentation fault error in my data plane path.
>>>>>>>> I am pretty sure the code where the segfault happened is ok,
>>>>>>>> so my guess is that I somehow received a corrupted mbuf.
>>>>>>>> How could I troubleshoot this? Is there any way?
>>>>>>>> Is it possible that other threads of the application
>>>>>>>> corrupted that mbuf?
>>>>>>>> 
>>>>>>>> I would really appreciate any advice.
>>>>>>>> Thanks.
>>>>>>>> 
>>>>>>>> DPDK 18.11.3
>>>>>>>> NIC: 82599ES
>>>>>>>> 
>>>>>>>> Code:
>>>>>>>> 
>>>>>>>> nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts_burst,
>>>>>>>>                          MAX_PKT_BURST);
>>>>>>>> 
>>>>>>>> ...
>>>>>>>> 
>>>>>>>> for (i = 0; i < vec_size; i++) {
>>>>>>>>         rte_prefetch0(rte_pktmbuf_mtod(m_v[i], void *));
>>>>>>>> }
>>>>>>>> 
>>>>>>>> for (i = 0; i < vec_size; i++) {
>>>>>>>>         m = m_v[i];
>>>>>>>>         eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
>>>>>>>>         eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);  <--- Segmentation fault
>>>>>>>> ...
>>>>>>>> 
>>>>>>>> #0  rte_arch_bswap16 (_x=<error reading variable: Cannot access
>>>>>>>> memory at address 0x4d80000000053010>)
>>>>>>> 
>>>>>>> Build with as many of the debug options turned on in the DPDK
>>>>>>> config as possible, and build with EXTRA_CFLAGS of -g.
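>>>>>>> 
>>>>>>> In particular, CONFIG_RTE_LIBRTE_MBUF_DEBUG turns on the mbuf
>>>>>>> library's own consistency checks. You can also call the check
>>>>>>> directly; a minimal sketch (per-packet cost, so debug builds
>>>>>>> only):
>>>>>>> 
>>>>>>> /* After rte_eth_rx_burst(): rte_mbuf_sanity_check() panics
>>>>>>>  * with a reason string if an mbuf is malformed, instead of
>>>>>>>  * letting it crash later in the fast path. */
>>>>>>> for (i = 0; i < nb_rx; i++)
>>>>>>>         rte_mbuf_sanity_check(pkts_burst[i], 1 /* is_header */);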
>>>>>> 
>>>>>> Could using an incorrect (very big) value of the mbuf pkt_len and
>>>>>> data_len fields while transmitting cause mbuf corruption and a
>>>>>> subsequent segmentation fault on RX?

