[dpdk-dev] A question about hugepage initialization time

Matt Laswell laswell at infiniteio.com
Tue Dec 9 23:05:16 CET 2014


Hey Everybody,

Thanks for the feedback.  Yeah, we're pretty sure that the amount of memory
we work with is atypical, and we're hitting something that isn't an issue
for most DPDK users.

To clarify, yes, we're using 1GB hugepages, and we set them up via
hugepagesz= and hugepages= on our kernel's grub line.  We find that when we
use four 1GB hugepages, EAL memory init takes a couple of seconds, which
is no big deal.  When we use 128 1GB pages, though, memory init can take
several minutes.  The concern is that we will very likely use even more
memory in the future.  Our boot time is mostly just a nuisance now;
nonlinear growth in memory init time may turn it into a larger problem.
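
For reference, our setup amounts to a kernel command line along these
lines (the page count here is illustrative, not our exact config):

    default_hugepagesz=1G hugepagesz=1G hugepages=128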

We've had to disable transparent hugepages due to latency issues with
in-memory databases.  I'll have to look at the possibility of alternative
memset implementations.  Perhaps some profiler time is in my future.
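
In case anybody else hits this, the kind of alternative I have in mind is
a streaming-store memset that writes around the cache, which can win on
buffers far larger than the LLC.  A minimal sketch (the function name and
alignment assumptions are mine, and it's untested against EAL):

    /* Zero a large buffer with non-temporal stores.  Assumes dst is
     * 16-byte aligned and len is a multiple of 16; a real version
     * would handle the unaligned head and tail. */
    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stddef.h>

    static void
    memset_nt_zero(void *dst, size_t len)
    {
        __m128i zero = _mm_setzero_si128();
        __m128i *p = (__m128i *)dst;
        size_t i;

        for (i = 0; i < len / sizeof(*p); i++)
            _mm_stream_si128(&p[i], zero);  /* bypass the cache */
        _mm_sfence();  /* make streaming stores globally visible */
    }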

Again, thanks to everybody for the useful information.

--
Matt Laswell
laswell at infiniteio.com
infinite io, inc.

On Tue, Dec 9, 2014 at 1:06 PM, Matthew Hall <mhall at mhcomputing.net> wrote:

> On Tue, Dec 09, 2014 at 10:33:59AM -0600, Matt Laswell wrote:
> > Our DPDK application deals with very large in memory data structures, and
> > can potentially use tens or even hundreds of gigabytes of hugepage
> memory.
>
> What you're doing is an unusual use case, and this is open source code
> that nobody may have tested or QA'ed at this scale yet.
>
> So my recommendation would be to add some rte_log statements to measure
> the various steps in the process and see what's going on. Also use the
> Linux perf framework for low-overhead sampling-based profiling, and make
> sure everything is compiled with debug symbols so you can see what's
> consuming the execution time.
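>
> To make that concrete, something along these lines (the step being timed
> and the log format are just illustrative):
>
>     /* Bracket a suspect EAL init step with the TSC and report it. */
>     #include <rte_cycles.h>
>     #include <rte_log.h>
>
>     uint64_t start = rte_rdtsc();
>     /* ... suspect step, e.g. mapping and zeroing the hugepages ... */
>     uint64_t cycles = rte_rdtsc() - start;
>     RTE_LOG(INFO, EAL, "hugepage map step: %.3f seconds\n",
>             (double)cycles / rte_get_tsc_hz());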
>
> You might find that it makes sense to use a custom allocator like jemalloc
> alongside the DPDK allocators, perhaps with "transparent hugepage mode" in
> your process, and some larger page sizes to reduce the number of pages.
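>
> For the transparent hugepage part, you can opt a single mapping in with
> madvise() instead of enabling THP system-wide, e.g. (assuming THP is set
> to "madvise" mode):
>
>     #include <sys/mman.h>
>
>     void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
>                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>     if (buf != MAP_FAILED)
>         madvise(buf, len, MADV_HUGEPAGE);  /* request THP here */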
>
> You can also use these handy kernel options: hugepagesz=<size> hugepages=N.
> This creates guaranteed-contiguous, known-good hugepages during boot, which
> initialize much more quickly and with fewer glitches in my experience.
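>
> After booting with those options you can confirm the reservation took
> effect, for example (the 1G sysfs path is per hugetlbpage.txt below):
>
>     grep Huge /proc/meminfo
>     cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages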
>
> https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
> https://www.kernel.org/doc/Documentation/vm/transhuge.txt
>
> There is no one-size-fits-all solution but these are some possibilities.
>
> Good Luck,
> Matthew.
>

