Including contigmem in core dumps
Lewis Donzis
lew at perftech.com
Tue May 28 15:19:22 CEST 2024
----- On May 28, 2024, at 1:55 AM, Dmitry Kozlyuk dmitry.kozliuk at gmail.com wrote:
> Hi Lewis,
>
> Memory reserved by eal_get_virtual_area() is not yet useful,
> but it is very large, so by excluding it from dumps,
> DPDK prevents dumps from including large zero-filled parts.
>
> It also makes sense to call eal_mem_set_dump(..., false)
> from eal_memalloc.c:free_seg(), because of --huge-unlink=never:
> in this mode (Linux-only), freed segments are not cleared,
> so if they were included in a dump, it would be a lot of garbage data.
>
> Newly allocated hugepages are not included in dumps
> because this would make dumps very large by default.
> However, this could be an opt-in as a runtime option if need be.
Thanks for the clarification. I agree that not including freed segments makes perfect sense.
But when debugging a core dump, it's sometimes very helpful to see what was in the mbuf being processed at the time. Perhaps it would be useful to have an option telling the allocator not to disable core dumps.
In the meantime, my experiments to work around this have not been fruitful.
I wondered whether we could enable core dumps for mbufs by calling rte_mempool_mem_iter() on the pool returned by rte_pktmbuf_pool_create() and having the callback call madvise(memhdr->addr, memhdr->len, MADV_CORE). But that didn't help; at least, the size of the core file didn't increase.
I then tried disabling the call to madvise() in the DPDK source code, and that didn't make any difference either.
Note that this is on FreeBSD, so I wonder if there's some fundamental reason that the contigmem memory doesn't get included in a core dump?