[dpdk-dev] [PATCH v2] mempool: Introduce _populate_mz_range api

Santosh Shukla santosh.shukla at caviumnetworks.com
Tue Feb 7 05:00:01 CET 2017


Hi Olivier,

On Mon, Feb 06, 2017 at 06:01:15PM +0100, Olivier Matz wrote:
> On Tue, 31 Jan 2017 20:02:26 +0530, Santosh Shukla
> <santosh.shukla at caviumnetworks.com> wrote:
> > Hi Olivier,
> > 
> > Reply inline.
> > 
> > On Tue, Jan 31, 2017 at 11:31:51AM +0100, Olivier Matz wrote:
> > > Hi Santosh,
> > > 
> > > I guess this patch is targeted for 17.05, right?
> > >  
> > 
> > Yes.
> > 
> > > Please see some other comments below.
> > > 
> > > On Fri, 20 Jan 2017 20:43:41 +0530,
> > > <santosh.shukla at caviumnetworks.com> wrote:  
> > > > From: Santosh Shukla <santosh.shukla at caviumnetworks.com>

[snip..]

> > > > --- a/lib/librte_mempool/rte_mempool.h
> > > > +++ b/lib/librte_mempool/rte_mempool.h
> > > > @@ -387,6 +387,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
> > > >   */
> > > >  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
> > > > +/**
> > > > + * Set the memzone va/pa addr range and len in the external pool.
> > > > + */
> > > > +typedef void (*rte_mempool_populate_mz_range_t)(struct rte_mempool *mp,
> > > > +		const struct rte_memzone *mz);
> > > > +
> > > 
> > > And this API would be:
> > > typedef void (*rte_mempool_populate_t)(struct rte_mempool *mp,
> > > 		char *vaddr, phys_addr_t paddr, size_t len)
> > >  
> > > 
> > > If your hw absolutely needs contiguous memory, a solution could be:
> > > 
> > > - add a new flag MEMPOOL_F_CONTIG (maybe a better name could be
> > >   found) saying that all the mempool objects must be contiguous.
> > > - add the ops_populate() callback in rte_mempool_populate_phys(), as
> > >   suggested above
> > > 
> > > Then:
> > > 
> > >   /* create an empty mempool */
> > >   rte_mempool_create_empty(...);
> > > 
> > >   /* set the handler:
> > >    *   in the ext handler, the mempool flags are updated with
> > >    *   MEMPOOL_F_CONTIG
> > >    */
> > >   rte_mempool_set_ops_byname(..., "my_hardware");
> > >   
> > 
> > You mean the ext handler will set the mempool flag using the
> > 'pool_config' param; something like rte_mempool_set_ops_byname(...,
> > "my_hardware", MEMPOOL_F_CONTIG)?
> 
> I don't mean changing the API of rte_mempool_set_ops_byname().
> I was suggesting something like this:
> 
> static int your_handler_alloc(struct rte_mempool *mp)
> {
> 	/* handler init */
> 	...
> 
> 	mp->flags |= MEMPOOL_F_CONTIG;
> 	return 0;
> }
> 

Ok, will do in v3.
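
For reference, a rough sketch of what the v3 handler registration could
look like ("my_hardware" and your_handler_alloc are only placeholders, and
MEMPOOL_F_CONTIG is the flag name proposed in this thread, not an existing
DPDK flag):

#include <rte_mempool.h>

static int your_handler_alloc(struct rte_mempool *mp)
{
	/* handler-private init would go here */

	/* tell the populate functions that this pool needs one
	 * physically contiguous zone (flag proposed in this thread) */
	mp->flags |= MEMPOOL_F_CONTIG;
	return 0;
}

static const struct rte_mempool_ops my_hardware_ops = {
	.name = "my_hardware",
	.alloc = your_handler_alloc,
	/* .free, .enqueue, .dequeue, .get_count omitted in this sketch */
};

MEMPOOL_REGISTER_OPS(my_hardware_ops);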

> And update the rte_mempool_populate_*() functions to manage this flag:
> instead of segmenting, just fail if it cannot fit in one segment. It won't
> really solve the issue, but at least it will be properly detected, and
> the user can take measures to get more contiguous memory (ex:
> use larger hugepages, allocate hugepages at boot time).
>

Agreed.
Will take care of it in v3.
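
Something along these lines in rte_mempool_populate_phys() is what I have
in mind -- only a sketch, again using the MEMPOOL_F_CONTIG name proposed
here and the total_elt_sz computation already used in rte_mempool.c:

/* inside rte_mempool_populate_phys(), before adding objects:
 * if the handler requires contiguous memory, refuse a chunk that
 * cannot hold the whole pool instead of silently segmenting it. */
if (mp->flags & MEMPOOL_F_CONTIG) {
	size_t total_elt_sz;

	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
	if (len < total_elt_sz * mp->size)
		return -ENOMEM;
}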

> > 
> > >   /* if MEMPOOL_F_CONTIG is set, all populate() functions should ensure
> > >    * that there is only one contiguous zone
> > >    */
> > >   rte_mempool_populate_default(...);
> > >   
> > 
> > I am not too sure about the scope of MEMPOOL_F_CONTIG. How can the
> > MEMPOOL_F_CONTIG flag {set by the application/driver by calling
> > rte_mempool_create(..., flag)} make sure there is only one contiguous
> > zone? I'd like to understand this from you.
> 
> As described above, there would be no change from the application's point
> of view. The handler would set the mempool flag by itself to change the
> behavior of the populate functions.
> 
> 

Right.
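
So from the application side the flow stays exactly as today, e.g. (pool
name, sizes and error handling below are only illustrative):

struct rte_mempool *mp;

/* create an empty pool, then attach the hw handler by name;
 * the handler's alloc() sets MEMPOOL_F_CONTIG internally */
mp = rte_mempool_create_empty("hw_pool", 8192, 2048, 256, 0,
			      rte_socket_id(), 0);
if (mp == NULL)
	rte_exit(EXIT_FAILURE, "cannot create mempool\n");

rte_mempool_set_ops_byname(mp, "my_hardware", NULL);

/* populate_default() would now fail if it cannot get one
 * physically contiguous zone for all objects */
if (rte_mempool_populate_default(mp) < 0)
	rte_exit(EXIT_FAILURE, "cannot populate mempool\n");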

> > 
> > In my understanding:
> > rte_mempool_populate_default() will request a memzone based on the
> > mp->size value; and if mp->size is more than one hugepage_sz {i.e. one
> > mz request is not enough}, then it will request more hugepages {i.e.
> > more mz requests}. So IIUC contiguity is not guaranteed.
> 
> Yes, that's how it works today. As the EAL cannot guarantee that the
> hugepages are physically contiguous, it tries to segment the mempool
> objects into several memory zones.
> 
> 
> Regards,
> Olivier
> 
> 
> 

Regards,
Santosh


