[dpdk-dev] [PATCH v5 1/3] mempool: support external handler

Jerin Jacob jerin.jacob at caviumnetworks.com
Tue May 31 18:03:42 CEST 2016


On Tue, May 31, 2016 at 04:37:02PM +0100, Hunt, David wrote:
> 
> 
> On 5/31/2016 9:53 AM, Jerin Jacob wrote:
> > On Mon, May 30, 2016 at 12:27:26PM +0100, Hunt, David wrote:
> > > New mempool handlers will use rte_mempool_create_empty(),
> > > rte_mempool_set_handler(),
> > > then rte_mempool_populate_*(). These three functions are new to this
> > > release, so no problem
> > Having separate APIs for external pool-manager creation is worrisome from
> > an application perspective. Is it possible to have rte_mempool_[xmem]_create
> > for both the external and existing SW pool managers, and make
> > rte_mempool_create_empty and rte_mempool_populate_* internal functions?
> > 
> > IMO, we can do that by selecting the specific rte_mempool_set_handler()
> > based on a _flags_ encoding, something like below:
> > 
> > bits 0 - 15  // generic bits used across all the pool managers
> > bits 16 - 23 // pool-handler-specific flag bits
> > bits 24 - 31 // to select the specific pool manager (up to 256 different
> > flavors of pool managers; for backward compatibility, '0' in bits 24 - 31
> > selects the existing SW pool manager)
> > 
> > and applications can choose the handlers by selecting the flag in
> > rte_mempool_[xmem]_create. That way it will be easy for testpmd or any
> > other application to choose the pool handler from the command line etc.
> > in future.
> 
> There might be issues with the 8-bit handler number, as we'd have to add an
> API call to first get the index of a given handler by name, then OR it into
> the flags. That would also mean extra API calls for the non-default external
> handlers. I do agree with the handler-specific bits, though.

That would be an internal API (mapping the upper 8 bits to a handler name),
right? Seems OK to me.
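
For illustration, a rough sketch of what that encoding could look like
(the macro names here are hypothetical, not part of any existing API):

	#define MEMPOOL_F_HANDLER_SHIFT 24
	#define MEMPOOL_F_HANDLER_MASK  (0xffu << MEMPOOL_F_HANDLER_SHIFT)

	/* '0' in bits 24 - 31 keeps the existing SW pool manager */
	#define MEMPOOL_F_HANDLER(idx) \
		(((uint32_t)(idx) << MEMPOOL_F_HANDLER_SHIFT) & \
		 MEMPOOL_F_HANDLER_MASK)

	/* inside rte_mempool_[xmem]_create, the handler index would be
	 * recovered from the flags and mapped back to a handler name by
	 * the internal API discussed above */
	idx = (flags & MEMPOOL_F_HANDLER_MASK) >> MEMPOOL_F_HANDLER_SHIFT;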

> 
> Having the _empty and _set_handler APIs seems to me to be OK for the
> moment. Maybe Olivier could comment?
> 

But that needs 3 APIs, right? _empty, _set_handler and _populate? I believe
it is better to reduce the public API surface wherever possible.

Maybe Olivier could comment ?
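
For comparison, the two approaches would look roughly like this from the
application's point of view (a sketch only; MEMPOOL_F_HANDLER is the
hypothetical flag encoding from above):

	/* proposed in the patch set: three public calls */
	mp = rte_mempool_create_empty("pool", n, elt_size, cache_size,
				      priv_size, socket_id, flags);
	rte_mempool_set_handler(mp, "my_handler");
	rte_mempool_populate_default(mp);

	/* suggested alternative: one public call, handler encoded in flags */
	mp = rte_mempool_create("pool", n, elt_size, cache_size, priv_size,
				mp_init, mp_init_arg, obj_init, obj_init_arg,
				socket_id, flags | MEMPOOL_F_HANDLER(idx));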


> > and we can remove the "mbuf: get default mempool handler from
> > configuration" change-set, OR just add: if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
> > is defined, then set it with rte_mempool_set_handler in
> > rte_mempool_[xmem]_create.
> > 
> > What do you think?
> 
> The "configuration" patch is to allow users to quickly change the mempool
> handler
> by changing RTE_MBUF_DEFAULT_MEMPOOL_HANDLER to another string of a known
> handler. It could just as easily be left out and use the rte_mempool_create.
>

Yes, I understand, but I am trying to avoid a build-time constant. IMO, it
would be better if RTE_MBUF_DEFAULT_MEMPOOL_HANDLER is not defined in the
config by default, and for a quick change developers can build with
RTE_MBUF_DEFAULT_MEMPOOL_HANDLER="specific handler".
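
i.e. something along these lines inside rte_mempool_[xmem]_create (sketch):

	#ifdef RTE_MBUF_DEFAULT_MEMPOOL_HANDLER
		/* optional build-time default; left undefined in the
		 * stock config */
		rte_mempool_set_handler(mp, RTE_MBUF_DEFAULT_MEMPOOL_HANDLER);
	#endif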

 
> > > to add a parameter to one of them for the config data. Also, since we're
> > > adding some new elements to the mempool structure, how about we add a new
> > > void pointer to a config data structure, as defined by the handler.
> > > 
> > > So, new element in rte_mempool struct alongside the *pool
> > >      void *pool;
> > >      void *pool_config;
> > > 
> > > Then add a param to the rte_mempool_set_handler function:
> > > int
> > > rte_mempool_set_handler(struct rte_mempool *mp, const char *name,
> > > 			void *pool_config)
> > IMO, maybe we need to have _set_ and _get_, so I think we can have two
> > separate callbacks in the external pool manager for that, if required.
> > For now, we can live with the pool-manager-specific 8 bits (bits 16 - 23)
> > for the configuration, as mentioned above, and add the new callbacks for
> > set and get when required.
> 
> OK, we'll keep the config to the 8 bits of the flags for now. That also
> means I won't add the pool_config void pointer either (for the moment).

OK to me.
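
If per-handler configuration does become necessary later, the handler ops
could grow a pair of optional callbacks, roughly like this (hypothetical
names, just to record the idea):

	/* optional; may be NULL if the handler has no configuration */
	int (*set_config)(struct rte_mempool *mp, const void *cfg);
	int (*get_config)(const struct rte_mempool *mp, void *cfg);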

> 
> > > > 2) IMO, it is better to change void *pool in struct rte_mempool to an
> > > > anonymous union type, something like below, so that the mempool
> > > > implementation can choose the best type:
> > > > 	union {
> > > > 		void *pool;
> > > > 		uint64_t val;
> > > > 	};
> > > Could we do this by using the union for the *pool_config suggested
> > > above? Would that give you what you need?
> > It would be extra overhead for an external pool manager to _alloc_ memory,
> > store the allocated pointer in the mempool struct (as *pool), and use pool
> > to point to other data structures, as some implementations need only a
> > limited number of bytes to store the external pool manager specific
> > context.
> > 
> > In order to fix this problem, we may classify fast-path and slow-path
> > elements in struct rte_mempool, move all fast-path elements into the first
> > cache line, and create an empty opaque space in the remaining bytes of
> > that cache line, so that if the external pool manager needs only limited
> > space then it is not required to allocate separate memory to hold the
> > per-core cache in the fast path.
> > 
> > Something like below:
> > union {
> > 	void *pool;
> > 	uint64_t val;
> > 	uint8_t extra_mem[16]; /* available free bytes in fast-path cache line */
> > };
> 
> Something for the future, perhaps? Will the 8 bits in the flags suffice for
> now?

OK. But a simple anonymous union of the two types should be OK to add now?
Not much change, I believe; if it's difficult, then postpone it:

union {
	void *pool;
	uint64_t val;
};
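
With that in place, a handler whose whole context fits in a 64-bit value
could avoid the separate allocation entirely, e.g. (sketch; hw_pool_id is
a made-up example value):

	/* store a small HW pool id directly, no extra _alloc_ needed */
	mp->val = hw_pool_id;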

> 
> 
> > Other points,
> > 
> > 1) Is it possible to remove the unused is_mp in the __mempool_put_bulk
> > function, as it is just an internal function?
> 
> Fixed

__mempool_get_bulk too.

 
> > 2) Considering "get" and "put" are the fast-path callbacks for the pool
> > manager, is it possible to avoid the extra overhead of the following
> > _load_ and the additional cache line on each call:
> > rte_mempool_handler_table.handler[handler_idx]
> > 
> > I understand it is for multiprocess support, but I am thinking: can we
> > introduce something like the ethernet API's multiprocess support and
> > resolve the "put" and "get" function pointers at init and store them in
> > struct mempool? Something like:
> > 
> > file: drivers/net/ixgbe/ixgbe_ethdev.c
> > search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> 
> I'll look at this one before posting the next version of the patch (soon).
> :)

OK
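
For reference, a rough sketch of how that ixgbe-style pattern might
translate to the mempool init path (the cached mp->put/mp->get fields are
hypothetical, not the current patch's layout):

	/* each process resolves the fast-path callbacks once at init;
	 * function pointers are per-address-space, so a secondary
	 * process must re-resolve them locally, just as ixgbe_ethdev.c
	 * re-installs its dev_ops when rte_eal_process_type() !=
	 * RTE_PROC_PRIMARY. This avoids the
	 * rte_mempool_handler_table.handler[handler_idx] load on every
	 * get/put. */
	mp->put = rte_mempool_handler_table.handler[mp->handler_idx].put;
	mp->get = rte_mempool_handler_table.handler[mp->handler_idx].get;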

> 
> 
> > Jerin
> > 
> Thanks for your input on this, much appreciated.
> Dave.
> 

