[dpdk-dev] mmap fails with more than 40000 hugepages

Neil Horman nhorman at tuxdriver.com
Thu Feb 5 15:59:32 CET 2015


On Thu, Feb 05, 2015 at 01:20:01PM +0000, Damjan Marion (damarion) wrote:
> 
> > On 05 Feb 2015, at 13:59, Neil Horman <nhorman at tuxdriver.com> wrote:
> > 
> > On Thu, Feb 05, 2015 at 12:00:48PM +0000, Damjan Marion (damarion) wrote:
> >> Hi,
> >> 
> >> I have system with 2 NUMA nodes and 256G RAM total. I noticed that DPDK crashes in rte_eal_init()
> >> when number of available hugepages is around 40000 or above.
> >> Everything works fine with lower values (i.e. 30000).
> >> 
> >> I also tried with allocating 40000 on node0 and 0 on node1, same crash happens.
> >> 
> >> 
> >> Any idea what might be causing this?
> >> 
> >> Thanks,
> >> 
> >> Damjan
> >> 
> >> 
> >> $ cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages
> >> 20000
> >> 20000
> >> 
> >> $ grep -i huge /proc/meminfo
> >> AnonHugePages:    706560 kB
> >> HugePages_Total:   40000
> >> HugePages_Free:    40000
> >> HugePages_Rsvd:        0
> >> HugePages_Surp:        0
> >> Hugepagesize:       2048 kB
> >> 
> > What's your shmmax value set to? 40000 2MB hugepages is way above the default
> > setting for how much shared RAM a system will allow.  I've not done the math on
> > your logs below, but judging by the size of some of the mapped segments, I'm
> > betting you're hitting the default limit of 4GB.
> 
> $ cat /proc/sys/kernel/shmmax
> 33554432
> 
> $ sysctl -w kernel.shmmax=8589934592
> kernel.shmmax = 8589934592
> 
> same crash :(
> 
> Thanks,
> 
> Damjan

What about the shmmni and shmmax values?  The shmmax value will also need to be
set to at least 80G (40000 2MB pages come to 80GB in total; more if you have other
shared memory needs), and shmmni will need to be larger than 40,000 to handle all
the segments you're creating.
Neil
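
A minimal sketch of what raising both limits could look like for this case; the
80GB figure is just 40000 x 2MB, and the shmmni value of 65536 is an assumed
choice to leave headroom above 40,000, not a value taken from the thread:

$ # 40000 hugepages x 2 MB = 80 GB = 85899345920 bytes
$ sysctl -w kernel.shmmax=85899345920
$ # allow more SysV shared memory segment IDs than the number of segments needed
$ sysctl -w kernel.shmmni=65536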


