Max size for rte_mempool_create
Stephen Hemminger
stephen at networkplumber.org
Wed Feb 9 23:40:16 CET 2022
On Wed, 9 Feb 2022 22:58:43 +0100
Antonio Di Bacco <a.dibacco.ks at gmail.com> wrote:
> Thanks! I already reserve huge pages from the kernel command line:
> six 1G hugepages. Is there any other reason for the ENOMEM?
>
> On Wed, 9 Feb 2022 at 22:44, Stephen Hemminger <stephen at networkplumber.org>
> wrote:
>
> > On Wed, 9 Feb 2022 22:20:34 +0100
> > Antonio Di Bacco <a.dibacco.ks at gmail.com> wrote:
> >
> > > I have a system with two NUMA sockets. Each socket has 8GB of RAM.
> > > I reserve a total of 6 hugepages (1G).
> > >
> > > When I try to create a mempool (API rte_mempool_create) of 530432 mbufs
> > > (each one with 9108 bytes) I get a ENOMEM error.
> > >
> > > In theory this mempool should be around 4.8GB and the hugepages
> > > are enough to hold it.
> > > Why is this failing?
> >
> > This is likely because the hugepages have to be contiguous and
> > the kernel has to have that many free pages (especially true
> > with 1G pages). Therefore it is recommended to configure and
> > reserve huge pages on the kernel command line during boot.
> >
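For reference, boot-time reservation of six 1G pages usually looks
like this on the kernel command line (a sketch; adjust the count to
your system):

    default_hugepagesz=1G hugepagesz=1G hugepages=6

After boot, "grep Huge /proc/meminfo" shows what was actually reserved.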
Your calculations look about right:

    elementsize = sizeof(struct rte_mbuf)
                + private_size
                + RTE_PKTMBUF_HEADROOM
                + mbuf_size;
    objectsize = rte_mempool_calc_obj_size(elementsize, 0, NULL);

With mbuf_size of 9108 and typical DPDK defaults (private_size = 0):

    elementsize = 128 + 0 + 128 + 9108 = 9364
    mempool rounds 9364 up to a cacheline (64) = 9408
    mempool object header = 192
    objectsize = 9408 + 192 = 9600 bytes per object

Total size of mempool requested = 530432 * 9600 = 5,092,147,200 bytes,
about 4.74G.
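As a sketch in C, the same arithmetic through the mempool API
(the private_size of 0 and the headroom and mbuf header sizes shown
in the comments are the usual defaults, assumed here):

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static void
    print_obj_size(void)
    {
        uint32_t mbuf_size = 9108;   /* payload size from this thread */
        uint32_t elt_size = sizeof(struct rte_mbuf)  /* 128 on common builds */
                          + 0                        /* private_size */
                          + RTE_PKTMBUF_HEADROOM     /* 128 by default */
                          + mbuf_size;               /* = 9364 here */
        struct rte_mempool_objsz sz;
        uint32_t total = rte_mempool_calc_obj_size(elt_size, 0, &sz);

        /* with default flags the element is padded to a cache line and
         * a per-object header is added: 9600 bytes per object here */
        printf("element %u -> %u bytes per object\n", elt_size, total);
    }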
If this is a NUMA machine, you may need to make sure that the kernel
has put the hugepages on the right socket. Perhaps it decided to split
them across sockets? If the six 1G pages ended up 3/3 across the two
nodes, neither socket has the ~4.74G of hugepage memory that a
single-socket mempool needs, hence the ENOMEM.
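You can check how the kernel distributed the pages per node
(1048576kB is the sysfs directory for the 1G page size):

    cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages

and, if needed, move the whole reservation onto one node at runtime:

    echo 6 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

Runtime allocation of 1G pages can fail once memory is fragmented,
which is why reserving them at boot is preferred.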