[dpdk-dev] A question about hugepage initialization time

Matthew Hall mhall at mhcomputing.net
Tue Dec 9 20:06:49 CET 2014


On Tue, Dec 09, 2014 at 10:33:59AM -0600, Matt Laswell wrote:
> Our DPDK application deals with very large in memory data structures, and
> can potentially use tens or even hundreds of gigabytes of hugepage memory.

What you're doing is an unusual use case, and this is open source code that 
nobody may have tested or QA'ed at that scale yet.

So my recommendation would be to add some rte_log statements to measure the 
various steps in the process and see where the time goes, to use the Linux 
perf framework for low-overhead sampling-based profiling, and to make sure 
everything is compiled with debug symbols so you can see what's consuming 
the execution time.

You might find that it makes sense to use a custom allocator such as jemalloc 
alongside the DPDK allocators, perhaps enabling transparent hugepage mode in 
your process, and using larger page sizes to reduce the number of pages.

You can also use these handy kernel boot options: hugepagesz=<size> hugepages=N . 
These create guaranteed-contiguous, known-good hugepages at boot time, which 
in my experience initialize much more quickly and with fewer glitches.
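As a concrete (illustrative) example, reserving sixteen 1 GiB pages at boot 
would look like this on the kernel command line (sizes and counts are just 
examples; pick values matching your hardware):

```
default_hugepagesz=1G hugepagesz=1G hugepages=16
```

You can then verify the reservation after boot with `grep Huge /proc/meminfo`. 
These parameters are documented in the hugetlbpage.txt link below.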

https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
https://www.kernel.org/doc/Documentation/vm/transhuge.txt

There is no one-size-fits-all solution but these are some possibilities.

Good Luck,
Matthew.

