[dpdk-dev] mmap fails with more than 40000 hugepages

Damjan Marion (damarion) damarion at cisco.com
Thu Feb 5 14:36:37 CET 2015


On 05 Feb 2015, at 14:22, Jay Rolette <rolette at infiniteio.com> wrote:

On Thu, Feb 5, 2015 at 6:00 AM, Damjan Marion (damarion) <damarion at cisco.com> wrote:
Hi,

I have a system with 2 NUMA nodes and 256 GB of RAM total. I noticed that DPDK crashes in rte_eal_init()
when the number of available hugepages is around 40000 or above.
Everything works fine with lower values (e.g. 30000).

I also tried allocating 40000 hugepages on node0 and 0 on node1; the same crash happens.


Any idea what might be causing this?
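[Editorial note: one thing that may be worth checking, as an assumption rather than anything established in this thread: EAL creates a separate mmap() for each 2 MB hugepage during initialization, so with 40000+ pages a process can approach the kernel's per-process mapping limit, vm.max_map_count (default 65530). A quick diagnostic sketch:]

```shell
# Per-process limit on the number of memory mappings (kernel default: 65530).
# Tens of thousands of per-page hugepage mappings can approach this limit.
cat /proc/sys/vm/max_map_count

# Overall hugepage pool status
grep -i huge /proc/meminfo

# Per-NUMA-node 2 MB hugepage counts (paths may vary by kernel/config)
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages 2>/dev/null || true

# If the mapping limit turns out to be the problem, raising it is one option:
# sysctl -w vm.max_map_count=262144
```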

Any reason you can't switch to using 1GB hugepages? You'll get better performance and your init time will be shorter. The systems we run on are similar (256GB, 2 NUMA nodes) and that works fine for us.

Yes, unfortunately some other consumers need smaller pages.
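[Editorial note: for readers who can take Jay's suggestion, reserving 1 GB hugepages is typically done at boot. A sketch using standard kernel boot parameters (the CPU must support 1 GB pages, e.g. the pdpe1gb flag on x86; page counts and mount point below are examples, not values from this thread):]

```shell
# Reserve 1 GB hugepages via the kernel command line (e.g. in GRUB config):
#   default_hugepagesz=1G hugepagesz=1G hugepages=16
# 1 GB pages generally must be reserved at boot, not at runtime.

# After reboot, verify the pool:
grep -i huge /proc/meminfo

# Mount a hugetlbfs instance for the application to use:
# mkdir -p /mnt/huge_1GB
# mount -t hugetlbfs -o pagesize=1G none /mnt/huge_1GB
```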


Not directly related, but if you have to stick with 2MB hugepages, you might want to take a look at a patch I submitted that fixes the O(n^2) algorithm used in initializing hugepages.

I tried it hoping it would change something; no luck…

Thanks,

Damjan


