[dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue Jun 21 11:28:51 CEST 2016



> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> Sent: Tuesday, June 21, 2016 4:35 AM
> To: Ananyev, Konstantin
> Cc: Thomas Monjalon; dev at dpdk.org; Hunt, David; olivier.matz at 6wind.com; viktorin at rehivetech.com; shreyansh.jain at nxp.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> 
> On Mon, Jun 20, 2016 at 05:56:40PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> > > Sent: Monday, June 20, 2016 3:22 PM
> > > To: Ananyev, Konstantin
> > > Cc: Thomas Monjalon; dev at dpdk.org; Hunt, David; olivier.matz at 6wind.com; viktorin at rehivetech.com; shreyansh.jain at nxp.com
> > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > >
> > > On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Thomas Monjalon
> > > > > Sent: Monday, June 20, 2016 2:54 PM
> > > > > To: Jerin Jacob
> > > > > Cc: dev at dpdk.org; Hunt, David; olivier.matz at 6wind.com; viktorin at rehivetech.com; shreyansh.jain at nxp.com
> > > > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > > > >
> > > > > 2016-06-20 18:55, Jerin Jacob:
> > > > > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > > > > This is a mempool handler that is useful for pipelining apps, where
> > > > > > > the mempool cache doesn't really work: for example, where one core
> > > > > > > does Rx (and alloc) and another core does Tx (and return). In
> > > > > > > such a case, the mempool ring simply cycles through all the mbufs,
> > > > > > > resulting in an LLC miss on every mbuf allocated when the number of
> > > > > > > mbufs is large. A stack recycles buffers more effectively in this
> > > > > > > case.
> > > > > > >
> > > > > > > Signed-off-by: David Hunt <david.hunt at intel.com>
> > > > > > > ---
> > > > > > >  lib/librte_mempool/Makefile            |   1 +
> > > > > > >  lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> > > > > >
> > > > > > How about moving the new mempool handlers to drivers/mempool (or similar)?
> > > > > > In future, adding HW-specific handlers in lib/librte_mempool/ may be a bad idea.
> > > > >
> > > > > You're probably right.
> > > > > However, we need to check and understand what a HW mempool handler will be.
> > > > > I imagine the first of them will have to move the handlers into drivers/.
> > > >
> > > > Does it mean we'll have to move mbuf into drivers too?
> > > > Again, other libs use mempool too.
> > > > Why not just lib/librte_mempool/arch/<arch_specific_dir_here>
> > > > ?
> > >
> > > I was proposing to move only the new
> > > handler (lib/librte_mempool/rte_mempool_stack.c), not any library or any
> > > other common code.
> > >
> > > Just like the DPDK crypto device: even if it is a software implementation,
> > > it's better to put it in drivers/crypto instead of lib/librte_cryptodev.
> > >
> > > "lib/librte_mempool/arch/" is not correct place as it is platform specific
> > > not architecture specific and HW mempool device may be PCIe or platform
> > > device.
> >
> > Ok, but why does rte_mempool_stack.c have to be moved?
> 
> Just thought of having all the mempool handlers in one place.
> We can't put all the HW mempool handlers in lib/librte_mempool/.

Yep, but from your previous mail I thought we might have specific ones
for specific devices, no?
If so, why put them all in one place? Why not put them in
drivers/xxx_dev/xxx_mempool.[h,c]
and keep the generic ones in lib/librte_mempool?
Konstantin
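
To ground the discussion: under the external mempool manager framework this
series builds on, a handler is essentially an rte_mempool_ops struct plus a
registration macro, so it stays self-contained wherever the file lives. Below
is a minimal sketch of a spinlock-protected LIFO handler in that style
(illustrative only, not the exact code of this patch; error handling trimmed):

#include <errno.h>
#include <stdint.h>
#include <rte_malloc.h>
#include <rte_mempool.h>
#include <rte_spinlock.h>

/* Per-pool state: a spinlock-protected array used as a LIFO stack. */
struct stack_pool {
	rte_spinlock_t sl;
	uint32_t size;  /* capacity in objects */
	uint32_t len;   /* objects currently stored */
	void *objs[];
};

static int
stack_alloc(struct rte_mempool *mp)
{
	struct stack_pool *s = rte_zmalloc_socket("stack_pool",
			sizeof(*s) + mp->size * sizeof(void *),
			RTE_CACHE_LINE_SIZE, mp->socket_id);

	if (s == NULL)
		return -ENOMEM;
	rte_spinlock_init(&s->sl);
	s->size = mp->size;
	mp->pool_data = s;
	return 0;
}

static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
{
	struct stack_pool *s = mp->pool_data;
	unsigned int i;

	rte_spinlock_lock(&s->sl);
	if (s->len + n > s->size) {
		rte_spinlock_unlock(&s->sl);
		return -ENOBUFS;
	}
	/* Push: the most recently freed objects end up on top... */
	for (i = 0; i < n; i++)
		s->objs[s->len++] = obj_table[i];
	rte_spinlock_unlock(&s->sl);
	return 0;
}

static int
stack_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
{
	struct stack_pool *s = mp->pool_data;
	unsigned int i;

	rte_spinlock_lock(&s->sl);
	if (n > s->len) {
		rte_spinlock_unlock(&s->sl);
		return -ENOENT;
	}
	/* ...and are handed out first, so hot buffers get reused. */
	for (i = 0; i < n; i++)
		obj_table[i] = s->objs[--s->len];
	rte_spinlock_unlock(&s->sl);
	return 0;
}

static unsigned int
stack_get_count(const struct rte_mempool *mp)
{
	const struct stack_pool *s = mp->pool_data;

	return s->len;
}

static void
stack_free(struct rte_mempool *mp)
{
	rte_free(mp->pool_data);
}

static struct rte_mempool_ops ops_stack = {
	.name = "stack",
	.alloc = stack_alloc,
	.free = stack_free,
	.enqueue = stack_enqueue,
	.dequeue = stack_dequeue,
	.get_count = stack_get_count,
};
MEMPOOL_REGISTER_OPS(ops_stack);

The LIFO order is the point of the patch: stack_dequeue hands out the most
recently freed objects first, so in the Rx-core/Tx-core pipeline described
above, the buffers being allocated are the ones most likely to still be warm
in cache.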

> 
> Jerin
> 
> > I can hardly imagine it is 'platform specific'.
> > From my understanding it is generic code.
> > Konstantin
> >
> >
> > >
> > > > Konstantin
> > > >
> > > >
> > > > > Jerin, are you volunteering?
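
For completeness, the application side selects a handler by name, which is
equally independent of where the handler file lives. A hedged sketch, assuming
the rte_mempool_create_empty()/rte_mempool_set_ops_byname() API from this
series ("stack_pool" and the pool sizing here are illustrative):

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_stack_pool(void)
{
	struct rte_mempool *mp;

	/* No per-lcore cache (cache_size = 0): the pipelined Rx/Tx case
	 * above is exactly where that cache does not help. */
	mp = rte_mempool_create_empty("stack_pool", 8192, 2048,
			0, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Bind the pool to the "stack" handler before populating it. */
	if (rte_mempool_set_ops_byname(mp, "stack", NULL) != 0 ||
			rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	return mp;
}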

