[dpdk-dev] CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES: no difference in memory pool allocations

Ilya Maximets i.maximets at samsung.com
Mon Nov 26 13:23:16 CET 2018


> Hi,
> 
> We have 2 NUMA nodes in our system, and we try to allocate a single DPDK memory pool on each one.
> However, we see no difference when enabling/disabling "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration.
> We expected that disabling it would allocate pools only on one NUMA node (probably NUMA0), but it actually allocates pools on both NUMA nodes, according to the "socket_id" parameter passed to the "rte_mempool_create" API.
> We have 192GB of memory, so NUMA1 memory starts at address 0x1800000000.
> As you can see below, "undDpdkPoolNameSocket_1" was indeed allocated on NUMA1, as we wanted, although "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" is disabled:
> 
> CONFIG_RTE_LIBRTE_VHOST_NUMA=n
> CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
> 
> created poolName=undDpdkPoolNameSocket_0, nbufs=887808, bufferSize=2432, total=2059MB
> (memZone: name=MP_undDpdkPoolNameSocket_0, socket_id=0, vaddr=0x1f2c0427d00-0x1f2c05abe00, paddr=0x178e627d00-0x178e7abe00, len=1589504, hugepage_sz=2MB)
> created poolName=undDpdkPoolNameSocket_1, nbufs=887808, bufferSize=2432, total=2059MB
> (memZone: name=MP_undDpdkPoolNameSocket_1, socket_id=1, vaddr=0x1f57fa7be40-0x1f57fbfff40, paddr=0x2f8247be40-0x2f825fff40, len=1589504, hugepage_sz=2MB)
> 
> Does anyone know what the "CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES" configuration option is used for?
> 
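For reference, the per-socket allocation you describe boils down to
something like the sketch below (not your actual code: I'm using the
rte_pktmbuf_pool_create() helper with the counts and sizes from your
log; the raw rte_mempool_create() call takes the same socket_id
argument):

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Create one packet mempool whose backing memory is requested
 * from the given NUMA node. Assumes rte_eal_init() has run. */
static struct rte_mempool *
create_pool_on_socket(int socket_id)
{
    char name[RTE_MEMPOOL_NAMESIZE];

    snprintf(name, sizeof(name), "undDpdkPoolNameSocket_%d", socket_id);

    /* nbufs=887808 and bufferSize=2432 taken from the log above;
     * socket_id selects the node the pool memory should come from. */
    return rte_pktmbuf_pool_create(name, 887808, 256, 0, 2432,
                                   socket_id);
}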

This config option was introduced to force DPDK to allocate memory
from the NUMA node requested by 'socket_id'. That is exactly
what you're observing.

See commit 1b72605d2416 ("mem: balanced allocation of hugepages")
for the original issue fixed by this option.
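
The core of that fix is to set a preferred-node memory policy via
libnuma before faulting in each hugepage, so that the kernel takes the
page from the right node's pool. Roughly like this (a simplified
sketch, not the actual eal_memory.c code: error handling is trimmed,
'hugefile' is assumed to be a path on a mounted hugetlbfs, and real
code should check numa_available() first; link with -lnuma):

#include <fcntl.h>
#include <numa.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGEPAGE_SZ (2 * 1024 * 1024)

static void *
map_hugepage_on_node(const char *hugefile, int node)
{
    void *va;
    int fd = open(hugefile, O_CREAT | O_RDWR, 0600);

    if (fd < 0)
        return NULL;

    /* Prefer allocations from 'node' for this thread; MAP_POPULATE
     * below faults the hugepage in immediately, while this policy
     * is still in effect. */
    numa_set_preferred(node);

    va = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
              MAP_SHARED | MAP_POPULATE, fd, 0);

    /* Restore the default local-allocation policy. */
    numa_set_localalloc();
    close(fd);

    return va == MAP_FAILED ? NULL : va;
}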

----

> Hi Asaf,
> 
> I cannot reproduce this behavior. Just tried running testpmd with DPDK 
> 18.08 as well as latest master [1], and DPDK could not successfully 
> allocate a mempool on socket 1.

I think that this is a bug: with the option enabled, you should be able
to successfully allocate memory from the requested NUMA node if it's
available.

With the option disabled, we just request pages from the kernel, and it
could return them from any NUMA node. With the option enabled, we try
to force the kernel to allocate from the nodes we need.
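
If you want to see where the kernel actually placed a given page, you
can query it with get_mempolicy(2). A small helper (my own sketch, not
part of DPDK; link with -lnuma):

#include <numaif.h>

/* Return the NUMA node backing the page that contains 'addr',
 * or -1 on error. */
static int
node_of_addr(void *addr)
{
    int node = -1;

    /* With MPOL_F_NODE | MPOL_F_ADDR, get_mempolicy() reports the
     * node of the page at 'addr' instead of the current policy. */
    if (get_mempolicy(&node, NULL, 0, addr,
                      MPOL_F_NODE | MPOL_F_ADDR) < 0)
        return -1;

    return node;
}

Running that on the memzone vaddrs from the log above will tell you
which node each pool really ended up on.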

Best regards, Ilya Maximets.

