[dpdk-dev] memory allocation requirements

Sergio Gonzalez Monroy sergio.gonzalez.monroy at intel.com
Thu Apr 14 17:39:02 CEST 2016


On 14/04/2016 15:46, Olivier MATZ wrote:
> Hi,
>
> On 04/13/2016 06:03 PM, Thomas Monjalon wrote:
>> After looking at the patches for container support, it appears that
>> some changes are needed in the memory management:
>> http://thread.gmane.org/gmane.comp.networking.dpdk.devel/32786/focus=32788 
>>
>>
>> I think it is time to collect the needs and expectations for the
>> DPDK memory allocator. The goal is to satisfy every need while
>> cleaning up the API.
>> Here is a first try to start the discussion.
>>
>> The memory allocator has two classes of API in DPDK.
>> First, the user/application allows or requires DPDK to take over some
>> memory resources of the system. The characteristics can be:
>>     - numa node
>>     - page size
>>     - swappable or not
>>     - contiguous (cannot be guaranteed) or not
>>     - physical address (as root only)
>> Then the drivers or other libraries use the memory through
>>     - rte_malloc
>>     - rte_memzone
>>     - rte_mempool
>> I think we can integrate the characteristics of the requested memory
>> into rte_malloc. Then rte_memzone would be just a named rte_malloc,
>> and rte_mempool would still focus on collections of objects with a cache.
>
> Just to mention that some evolutions [1] are planned for mempool in
> 16.07, allowing a mempool to be populated with several chunks of
> memory while still ensuring that the objects are physically
> contiguous. This completely removes the need to allocate a big
> virtually contiguous memory zone (which is also physically contiguous
> when rte_mempool_create_xmem() is not used, as is probably the case
> in most applications).
>
> Knowing this, the code that remaps the hugepages to get the largest
> possible physically contiguous zone probably becomes useless once the
> mempool series is applied. Changing it to a single mmap() of a
> hugetlbfs file per NUMA socket would clearly simplify this part of
> the EAL.
>

Are you suggesting making those changes once the mempool series
has been applied, while keeping the current memzone/malloc behavior?

Regards,
Sergio

> For other allocations that must be physically contiguous (e.g. zones
> shared with the hardware), a page-sized granularity may be
> enough.
>
> Regards,
> Olivier
>
> [1] http://dpdk.org/ml/archives/dev/2016-April/037464.html


