[dpdk-dev] [PATCH v3 0/6] use IOVAs check based on DMA mask

Alejandro Lucero alejandro.lucero at netronome.com
Mon Oct 29 12:39:14 CET 2018


I have a patch that solves a bug when calling rte_eal_check_dma_mask: the mask
was being used instead of the maskbits. However, this does not solve the deadlock.

Interestingly, the problem looks like a compiler issue. Calling
rte_memseg_walk inside rte_eal_check_dma_mask does not return, but if you
modify the call like this:

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 12dcedf5c..69b26e464 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -462,7 +462,7 @@ rte_eal_check_dma_mask(uint8_t maskbits)

        /* create dma mask */
        mask = ~((1ULL << maskbits) - 1);

-       if (rte_memseg_walk(check_iova, &mask))
+       if (!rte_memseg_walk(check_iova, &mask))
                /*
                 * Dma mask precludes hugepage usage.
                 * This device can not be used and we do not need to keep
it works, although the value returned to the caller changes, of course.
But the point here is that calling rte_memseg_walk should behave the same
as before, and it does not.


Anatoly, maybe you can see something I can not.



On Mon, Oct 29, 2018 at 10:15 AM Alejandro Lucero <
alejandro.lucero at netronome.com> wrote:

> Apologies. Forget my previous email. I was just using the wrong repo.
>
> Looking at solving this ASAP.
>
> On Mon, Oct 29, 2018 at 10:11 AM Alejandro Lucero <
> alejandro.lucero at netronome.com> wrote:
>
>> I know what is going on.
>>
>> In patchset version 3 I forgot to remove some old code. Anatoly spotted
>> that and I was going to send another version fixing it. Before sending
>> the new version I saw the report about a problem with dma_mask, and I'm
>> afraid I did not send another version with the fix ...
>>
>> Yao, can you try the following patch?
>>
>> diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
>> index ef656bbad..26adf46c0 100644
>> --- a/lib/librte_eal/common/eal_common_memory.c
>> +++ b/lib/librte_eal/common/eal_common_memory.c
>> @@ -458,10 +458,6 @@ rte_eal_check_dma_mask(uint8_t maskbits)
>>
>>                 return -1;
>>         }
>>
>> -       /* keep the more restricted maskbit */
>> -       if (!mcfg->dma_maskbits || maskbits < mcfg->dma_maskbits)
>> -               mcfg->dma_maskbits = maskbits;
>> -
>>         /* create dma mask */
>>         mask = ~((1ULL << maskbits) - 1);
>>
>>
>> On Mon, Oct 29, 2018 at 9:48 AM Thomas Monjalon <thomas at monjalon.net>
>> wrote:
>>
>>> 29/10/2018 10:36, Yao, Lei A:
>>> > From: Thomas Monjalon [mailto:thomas at monjalon.net]
>>> > > 29/10/2018 09:23, Yao, Lei A:
>>> > > > Hi, Lucero, Thomas
>>> > > >
>>> > > > This patch set causes a deadlock during memory initialization:
>>> > > > rte_memseg_walk and try_expand_heap both take the lock
>>> > > > &mcfg->memory_hotplug_lock, so a deadlock occurs.
>>> > > >
>>> > > > #0       rte_memseg_walk
>>> > > > #1  <-rte_eal_check_dma_mask
>>> > > > #2  <-alloc_pages_on_heap
>>> > > > #3  <-try_expand_heap_primary
>>> > > > #4  <-try_expand_heap
>>> > > >
>>> > > > Log as follows:
>>> > > > EAL: TSC frequency is ~2494156 KHz
>>> > > > EAL: Master lcore 0 is ready (tid=7ffff7fe3c00;cpuset=[0])
>>> > > > [New Thread 0x7ffff5e0d700 (LWP 330350)]
>>> > > > EAL: lcore 1 is ready (tid=7ffff5e0d700;cpuset=[1])
>>> > > > EAL: Trying to obtain current memory policy.
>>> > > > EAL: Setting policy MPOL_PREFERRED for socket 0
>>> > > > EAL: Restoring previous memory policy: 0
>>> > > >
>>> > > > Could you check this? Many test cases in our validation
>>> > > > team fail because of it. Thanks a lot!
>>> > >
>>> > > Can we just call rte_memseg_walk_thread_unsafe()?
>>> > >
>>> > > +Cc Anatoly
>>> >
>>> > Hi, Thomas
>>> >
>>> > I changed to rte_memseg_walk_thread_unsafe(), and it still
>>> > does not work.
>>> >
>>> > EAL: Setting policy MPOL_PREFERRED for socket 0
>>> > EAL: Restoring previous memory policy: 0
>>> > EAL: memseg iova 140000000, len 40000000, out of range
>>> > EAL:    using dma mask ffffffffffffffff
>>> > EAL: alloc_pages_on_heap(): couldn't allocate memory due to DMA mask
>>> > EAL: Trying to obtain current memory policy.
>>> > EAL: Setting policy MPOL_PREFERRED for socket 1
>>> > EAL: Restoring previous memory policy: 0
>>> > EAL: memseg iova 1bc0000000, len 40000000, out of range
>>> > EAL:    using dma mask ffffffffffffffff
>>> > EAL: alloc_pages_on_heap(): couldn't allocate memory due to DMA mask
>>> > error allocating rte services array
>>> > EAL: FATAL: rte_service_init() failed
>>> > EAL: rte_service_init() failed
>>> > PANIC in main():
>>>
>>> I think it is showing there are at least 2 issues:
>>>         1/ deadlock
>>>         2/ allocation does not comply with mask check (out of range)
>>>
>>>
>>>

