[dpdk-dev] [PATCH] eal: change max hugepage sizes to 4

Gagandeep Singh G.Singh at nxp.com
Wed Aug 7 14:47:06 CEST 2019



> 
> On Wed, Aug 7, 2019 at 12:26 PM Gagandeep Singh <g.singh at nxp.com> wrote:
> >
> > DPDK currently supports a maximum of 3 hugepage
> > sizes, whereas the system can support more than this,
> > e.g. 64K, 2M, 32M and 1G.
> 
> You can mention the ARM platform here, and that this issue starts with
> kernel 5.2 (and I would try to mention this in the title as well).
> This is better than an annotation that will be lost.
> 
> 
> > When all four hugepage sizes are available for DPDK to use,
> > which is the case with the '--in-memory' EAL option or
> > when using 4 separate mount points, one per hugepage size,
> > hugepage_info_init() reports an error.
> 
> Can you describe what is the impact from a user point of view rather
> than mentioning this internal function?
> 
> 
> > This change increases the maximum number of supported
> > mount points to 4.
> 
> I suppose this fix does the trick for you.
> However, we are in internal structures and I can't think of an impact
> on datapath.
> So we might as well use dynamic allocations rather than just enlarge this array.
> 
> Did you consider this?
Yes, we have thought about it, but that would mean a lot more testing is required across all supported kernels, and maybe on some stacks as well.
MAX_HUGEPAGE_SIZES has been a static value of 3 since the beginning, while ARM (and maybe some other platforms) has supported 4 sizes for a very long time.
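
For reference, the change itself is only a bump of that constant. From memory the define lives in the internal EAL config header (the exact file name and path may differ between DPDK versions):

    /* eal_internal_cfg.h (exact location may vary per DPDK release) */
    /* before: */
    #define MAX_HUGEPAGE_SIZES 3 /**< support up to 3 page sizes */
    /* after: */
    #define MAX_HUGEPAGE_SIZES 4 /**< support up to 4 page sizes */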

The value of this macro has not changed in a long time; it is just a mismatch between what DPDK supports and what the underlying hardware supports.
The issue shows up now because, starting with kernel 5.2, the kernel creates the directories for each supported hugepage size by default, instead of only for the sizes given in the bootargs.
Here are the possible cases that we are aware of:

For a 64KB granule, the kernel supports the following huge page sizes:
        2MB     using 32 x 64KB pages which are contiguous
        512MB   using a level 2 block mapping (a pmd_t)
        16GB    using 32 x 512MB block mappings

For a 16KB granule, we have:
        2MB     using 128 x 16KB pages
        32MB    using a level 2 block mapping (a pmd_t)
        1GB     using 32 x 32MB block mappings

For a 4KB granule, we have:
        64KB    using 16 x 4KB pages
        2MB     using a level 2 block mapping (a pmd_t)
        32MB    using 16 x level 2 block mappings
        1GB     using a level 1 block mapping (a pud_t)

So using a static value of 4 should cover all of these cases.
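
If anyone wants to check how many sizes their kernel actually exposes, each size shows up as a hugepages-<size>kB directory under /sys/kernel/mm/hugepages, which (if I remember the code correctly) is also what the EAL hugepage detection scans. A quick standalone sketch, illustration only, not DPDK code:

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* One hugepages-<size>kB directory per supported hugepage size. */
        const char *path = "/sys/kernel/mm/hugepages";
        struct dirent *ent;
        DIR *dir = opendir(path);
        int n = 0;

        if (dir == NULL) {
            perror(path);
            return 1;
        }
        while ((ent = readdir(dir)) != NULL) {
            if (strncmp(ent->d_name, "hugepages-", 10) == 0) {
                printf("%s\n", ent->d_name);
                n++;
            }
        }
        closedir(dir);
        printf("%d hugepage size(s) exposed by the kernel\n", n);
        return 0;
    }

On a 4KB granule with kernel 5.2 this lists 4 entries, one more than the old MAX_HUGEPAGE_SIZES of 3.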

> 
> 
> --
> David Marchand

