[dpdk-dev] [RFC] mempool: introduce indexed memory pool

Jerin Jacob jerinjacobk at gmail.com
Sat Oct 19 14:28:00 CEST 2019


On Fri, 18 Oct, 2019, 3:40 pm Xueming(Steven) Li, <xuemingl at mellanox.com>
wrote:

> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk at gmail.com>
> > Sent: Friday, October 18, 2019 12:41 AM
> > To: Xueming(Steven) Li <xuemingl at mellanox.com>
> > Cc: Olivier Matz <olivier.matz at 6wind.com>; Andrew Rybchenko
> > <arybchenko at solarflare.com>; dpdk-dev <dev at dpdk.org>; Asaf Penso
> > <asafp at mellanox.com>; Ori Kam <orika at mellanox.com>; Stephen
> > Hemminger <stephen at networkplumber.org>
> > Subject: Re: [dpdk-dev] [RFC] mempool: introduce indexed memory pool
> >
> > On Thu, Oct 17, 2019 at 6:43 PM Xueming(Steven) Li
> > <xuemingl at mellanox.com> wrote:
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk at gmail.com>
> > > > Sent: Thursday, October 17, 2019 3:14 PM
> > > > To: Xueming(Steven) Li <xuemingl at mellanox.com>
> > > > Cc: Olivier Matz <olivier.matz at 6wind.com>; Andrew Rybchenko
> > > > <arybchenko at solarflare.com>; dpdk-dev <dev at dpdk.org>; Asaf Penso
> > > > <asafp at mellanox.com>; Ori Kam <orika at mellanox.com>
> > > > Subject: Re: [dpdk-dev] [RFC] mempool: introduce indexed memory pool
> > > >
> > > > On Thu, Oct 17, 2019 at 12:25 PM Xueming Li <xuemingl at mellanox.com>
> > > > wrote:
> > > > >
> > > > > Indexed memory pool manages memory entries by index; an allocation
> > > > > from the pool returns both the memory pointer and an index (ID).
> > > > > Users save the ID as a u32 or smaller (u16) instead of a traditional
> > > > > 8-byte pointer. Memory can later be retrieved from the pool, or
> > > > > returned to it, by index.
> > > > >
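
A rough sketch of that usage model, for illustration only (the pool type and
function names below are made up for this sketch, not the actual API from the
patch):

#include <stdint.h>
#include <stddef.h>

struct rte_indexed_pool;                    /* opaque pool handle (assumed) */
void *ipool_malloc(struct rte_indexed_pool *pool, uint32_t *idx);
void *ipool_get(struct rte_indexed_pool *pool, uint32_t idx);
void  ipool_free(struct rte_indexed_pool *pool, uint32_t idx);

struct my_node {
	uint32_t next_idx; /* 4-byte index saved instead of an 8-byte pointer */
	uint64_t data;
};

static void
example(struct rte_indexed_pool *pool)
{
	uint32_t idx;
	struct my_node *n;

	/* Allocate one fixed-size entry: get both pointer and index. */
	n = ipool_malloc(pool, &idx);
	if (n == NULL)
		return;
	n->data = 42;

	/* Later, resolve the saved u32 index back to a pointer... */
	n = ipool_get(pool, idx);

	/* ...and return the entry to the pool by index. */
	ipool_free(pool, idx);
}
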
> > > > > The pool allocates backend memory in chunks on demand, so the pool
> > > > > size grows dynamically. A bitmap tracks entry usage in each chunk,
> > > > > so the management overhead is one bit per entry.
> > > > >
> > > > > Standard rte_malloc imposes a malloc overhead (64B) and a minimum
> > > > > data size (64B). This pool aims to save both of those costs as well
> > > > > as the pointer size. For scenarios like creating millions of
> > > > > rte_flows, each consisting of small pieces of memory, the difference
> > > > > is huge.
> > > > >
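
As a rough, illustrative calculation based on the 64B figures above: one
million objects of, say, 32 bytes each cost at least 1M x (64B header + 64B
minimum data) ~= 128 MB with rte_malloc, but roughly 1M x 32B of entries plus
about 1 Mbit (~0.125 MB) of bitmap ~= 32 MB with a fixed-size indexed pool,
and on top of that every stored reference shrinks from an 8-byte pointer to a
4-byte index.
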
> > > > > Like the standard memory pool, this lightweight pool only supports
> > > > > fixed-size memory allocation. A separate pool should be created for
> > > > > each different size.
> > > > >
> > > > > To facilitate memory allocated by index, a set of ILIST_XXX macros
> > > > > is defined to operate on entries as a regular LIST.
> > > > >
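
For illustration, an index-based list walk could look roughly like the macro
below (illustrative only; the field name, the use of the hypothetical
ipool_get() from the sketch above, and the assumption that index 0 means
"end of list" are all mine, not taken from the patch):

/* Walk a list whose "next" link is a u32 pool index rather than a pointer. */
#define ILIST_FOREACH(pool, head_idx, idx, entry, next_field)          \
	for ((idx) = (head_idx),                                        \
	     (entry) = (idx) ? ipool_get((pool), (idx)) : NULL;         \
	     (entry) != NULL;                                           \
	     (idx) = (entry)->next_field,                               \
	     (entry) = (idx) ? ipool_get((pool), (idx)) : NULL)

A caller could then write ILIST_FOREACH(pool, head, i, node, next_idx) { ... }
with the my_node layout from the earlier sketch.
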
> > > > > By setting the entry size to zero, the pool can be used as an ID
> > > > > generator.
> > > > >
> > > > > Signed-off-by: Xueming Li <xuemingl at mellanox.com>
> > > > > ---
> > > > >  lib/librte_mempool/Makefile                |   3 +-
> > > > >  lib/librte_mempool/rte_indexed_pool.c      | 289 +++++++++++++++++++++
> > > > >  lib/librte_mempool/rte_indexed_pool.h      | 224 ++++++++++++++++
> > > >
> > > > Can this be abstracted over the driver interface instead of creating
> > > > new APIs, i.e., using drivers/mempool/?
> > >
> > > The driver interface manages memory entries with pointers, while this
> > > API uses a u32 index as the key...
> >
> > I see. As a use case, it makes sense to me.
>
> > Have you checked the possibility of reusing/extending
> > lib/librte_eal/common/include/rte_bitmap.h for bitmap management,
> > instead of rolling a new one?
>
> Yes, rte_bitmap is designed for a fixed bitmap size; to grow it, almost the
> entire bitmap (array1 + array2) has to be copied.
> This pool distributes array2 into each trunk, and the trunk array actually
> plays the array1 role.
> When growing, only array1, which is smaller, grows; the existing array2 in
> each trunk is untouched.
>
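
If I read the above correctly, the layout is roughly the following (a sketch
of my understanding only, assuming fixed-size trunks for simplicity; these
are not the structures from the patch):

#include <stdint.h>

#define ENTRIES_PER_TRUNK 4096
#define BITMAP_WORDS_PER_TRUNK (ENTRIES_PER_TRUNK / 64)

struct ipool_trunk {
	uint64_t bitmap[BITMAP_WORDS_PER_TRUNK]; /* "array2": usage bits */
	uint8_t entries[];                       /* fixed-size entries */
};

struct ipool {
	struct ipool_trunk **trunk_tbl; /* "array1": reallocated on growth */
	uint32_t n_trunks;
	uint32_t entry_size;
};

/* An index decomposes into (trunk, offset); lookup is O(1), and growth only
 * has to reallocate the small trunk_tbl, never the trunks themselves. */
static inline void *
ipool_entry(struct ipool *p, uint32_t idx)
{
	struct ipool_trunk *t = p->trunk_tbl[idx / ENTRIES_PER_TRUNK];

	return &t->entries[(idx % ENTRIES_PER_TRUNK) * p->entry_size];
}
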

IMO, a growing bitmap is a generic problem, so moving the bitmap management
logic to a common place will be useful to other libraries in the future. My
suggestion would be to enhance rte_bitmap to support dynamic bitmaps through
new APIs.
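
For example, the extension could take a shape along these lines (purely a
sketch of possible new APIs; none of this exists in rte_bitmap today):

#include <stdint.h>

struct rte_bitmap_dyn; /* opaque; internally a two-level, trunked layout */

struct rte_bitmap_dyn *rte_bitmap_dyn_create(uint32_t n_bits, int socket_id);
/* Grow capacity to at least n_bits; only the top-level index is
 * reallocated, existing leaf arrays stay in place (no full copy). */
int rte_bitmap_dyn_resize(struct rte_bitmap_dyn *bmp, uint32_t n_bits);
void rte_bitmap_dyn_set(struct rte_bitmap_dyn *bmp, uint32_t pos);
void rte_bitmap_dyn_clear(struct rte_bitmap_dyn *bmp, uint32_t pos);
/* Find, clear and return the position of any set bit, or -1 if empty. */
int64_t rte_bitmap_dyn_scan_any(struct rte_bitmap_dyn *bmp);
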



> The map_xxx() naming might confuse people; I'll make the following changes
> in the next version:
>         map_get()/map_set(): only used once and the code is simple; move
> the code into the caller.
>         map_is_empty()/map_clear(): unused; remove.
>         map_clear_any(): relatively simple; embed into the caller.
>

