[dpdk-dev] [PATCH v8 1/3] mempool: support external mempool operations

Hunt, David david.hunt at intel.com
Thu Jun 9 15:18:57 CEST 2016



On 9/6/2016 1:30 PM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
>> Hi Jerin,
> Hi Shreyansh,
>
>>>> Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
>>>> it replaces the rte_mempool_set_ops_byname with a flag bit. However, I'm
>>>> not sure we want to introduce a third method of creating a mempool to the
>>>> developers. If we introduced this, we would then have:
>>>> 1. rte_pktmbuf_pool_create()
>>>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>>>>     use the configured custom handler)
>>>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
>>>>     by a call to rte_mempool_set_ops_byname() (would allow several different
>>>>     custom handlers to be used in one application)
>>>>
>>>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>>> As I mentioned earlier, my take is not to create separate APIs for
>>> external mempool handlers. In my view, it's the same thing, just a
>>> separate mempool handler reached through function pointers.
>>>
>>> To keep backward compatibility, I think we can extend the flags
>>> in rte_mempool_create and have a single API for external/internal pool
>>> creation (this makes it easy for existing applications too: just add a
>>> mempool flag command line argument to an existing application to choose
>>> the mempool handler).
>> May be I am interpreting it wrong, but, are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as command line argument)?
>> That would be inefficient especially for cases where pool is backed by a hardware. The application wouldn't want its generic buffers to consume hardware resource which would be better used for packets.
> It may vary from platform to platform or with the particular use case. For
> instance, a HW external pool manager for generic buffers may scale better
> than the SW multi-producer/multi-consumer implementation when the number of
> cores > N, as there is no locking involved in enqueue/dequeue (though again,
> this depends on the specific HW implementation).
>
> I thought there was no harm in selecting the external pool handler
> at the root level itself (rte_mempool_create), as the default remains
> SW MP/MC and this is just an option to override it if the application
> wants to.
>
> Jerin
>


So, how about we go with the following, based on Shreyansh's suggestion:

1. Add #define MEMPOOL_F_EMM_ALLOC 0x0010  (EMM for External Mempool
Manager)

2. Check this bit in rte_mempool_create() just before the other bits are 
checked (by the way, the flags check had been moved to 
rte_mempool_create(), as per an earlier patchset, but was inadvertently 
reverted)

     /*
      * First check whether to use the configured mempool handler.
      * Then examine the combinations of SP/SC/MP/MC flags to
      * set the correct index into the table of ops structs.
      */
     if (flags & MEMPOOL_F_EMM_ALLOC)
         rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
     else if ((flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) ==
             (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
         rte_mempool_set_ops_byname(mp, "ring_sp_sc");
     else if (flags & MEMPOOL_F_SP_PUT)
         rte_mempool_set_ops_byname(mp, "ring_sp_mc");
     else if (flags & MEMPOOL_F_SC_GET)
         rte_mempool_set_ops_byname(mp, "ring_mp_sc");
     else
         rte_mempool_set_ops_byname(mp, "ring_mp_mc");

3. Modify rte_pktmbuf_pool_create to pass the bit to rte_mempool_create

-        sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+        sizeof(struct rte_pktmbuf_pool_private), socket_id,
+        MEMPOOL_F_EMM_ALLOC);


This will allow legacy apps to use one external handler (as defined by 
RTE_MBUF_DEFAULT_MEMPOOL_OPS) by adding the MEMPOOL_F_EMM_ALLOC bit to 
their flags in the call to rte_mempool_create().

Of course, if an app wants to use more than one external handler, it can 
call rte_mempool_create_empty() and rte_mempool_set_ops_byname() for each 
mempool it creates.
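That multi-handler sequence might look something like the sketch below. This is a non-compilable illustration, not code from the patchset: the "custom_hw" ops name, pool names, and sizes are made up, and I'm assuming the two-argument rte_mempool_set_ops_byname() form used earlier in this mail:

```c
/* Illustrative only: two pools, each bound to a different handler.
 * Handler names and sizes are hypothetical. */
struct rte_mempool *pkt_pool, *buf_pool;

/* Packet pool backed by a hypothetical HW-backed handler. */
pkt_pool = rte_mempool_create_empty("pkt_pool", 8192, 2048, 256,
		sizeof(struct rte_pktmbuf_pool_private), SOCKET_ID_ANY, 0);
rte_mempool_set_ops_byname(pkt_pool, "custom_hw");
rte_mempool_populate_default(pkt_pool);

/* Generic buffer pool left on the default SW ring handler. */
buf_pool = rte_mempool_create_empty("buf_pool", 1024, 128, 0, 0,
		SOCKET_ID_ANY, 0);
rte_mempool_set_ops_byname(buf_pool, "ring_mp_mc");
rte_mempool_populate_default(buf_pool);
```

The key constraint is that set_ops_byname() must run after create_empty() but before the pool is populated, since the handler's alloc callback is invoked during population.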

Regards,
Dave.
