[dpdk-dev] usages issue with external mempool

Jerin Jacob jerin.jacob at caviumnetworks.com
Wed Jul 27 11:51:29 CEST 2016


On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> Hi,
>                There was lengthy discussions w.r.t external mempool patches. However, I am still finding usages issue with the agreed approach.
> 
> The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does not provide an option to change the object init callback. This may be the reason that many applications (e.g. OVS) use rte_mempool_create to create their packet mempool with their own object initializer (e.g. ovs_rte_pktmbuf_init).
> 
> e.g the existing usages are:
>         dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
>                                      MP_CACHE_SZ,
>                                      sizeof(struct rte_pktmbuf_pool_private),
>                                      rte_pktmbuf_pool_init, NULL,
>                                      ovs_rte_pktmbuf_init, NULL,
>                                      socket_id, 0);
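> 
> For reference, the current helper prototype offers no per-object init hook;
> it always applies rte_pktmbuf_init internally:
> 
>         struct rte_mempool *
>         rte_pktmbuf_pool_create(const char *name, unsigned n,
>                 unsigned cache_size, uint16_t priv_size,
>                 uint16_t data_room_size, int socket_id);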
> 
> 
> With the new API set for packet pool creation, this needs to be changed to:
> 
>         dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
>                                            MP_CACHE_SZ,
>                                            sizeof(struct rte_pktmbuf_pool_private),
>                                            socket_id, 0);
>         if (dmp->mp == NULL)
>                 break;
> 
>         rte_errno = rte_mempool_set_ops_byname(dmp->mp,
>                                                RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
>         if (rte_errno != 0) {
>                 RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
>                 return NULL;
>         }
>         rte_pktmbuf_pool_init(dmp->mp, NULL);
> 
>         ret = rte_mempool_populate_default(dmp->mp);
>         if (ret < 0) {
>                 rte_mempool_free(dmp->mp);
>                 rte_errno = -ret;
>                 return NULL;
>         }
> 
>         rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
> 
> It is not a user-friendly approach to ask applications to replace 1 API call with 6 new API calls. Or am I missing something?

I agree. To me, this is very bad; I have raised this concern earlier
as well.

Since applications like OVS go through "rte_mempool_create" even for
packet buffer pool creation, IMO it makes sense to extend
"rte_mempool_create" to take one more argument providing the external
pool handler name (NULL for the default). I don't see any valid technical
reason to treat the external-pool-handler-based mempool creation API
differently from the default handler.
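
Something along these lines, for instance (the extra "ops_name" argument is
only an illustration of the idea, not an existing or agreed prototype):

        struct rte_mempool *
        rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
                unsigned cache_size, unsigned private_data_size,
                rte_mempool_ctor_t *mp_init, void *mp_init_arg,
                rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
                int socket_id, unsigned flags,
                const char *ops_name); /* illustrative: NULL = default handler */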

Olivier, David

Thoughts ?

If we agree on this, then maybe I can send the API deprecation notice for
rte_mempool_create for v16.11.

Jerin


> 
> I think, we should do one of the following:
> 
> 1. Enhance "rte_pktmbuf_pool_create" to optionally accept "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs; if obj_init is not given, the default can be used (see the sketch after this list).
> 2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the above-described behavior, e.g.:
> 
>         /* helper to create a mbuf pool */
>         struct rte_mempool *
>         rte_pktmbuf_pool_create_new(const char *name, unsigned n,
>                 unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
>                 rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
>                 int socket_id)
> 3. Let the existing rte_mempool_create accept a flag such as "MEMPOOL_F_HW_PKT_POOL". Obviously, if this flag is set, all other flag values should be ignored. This was discussed earlier as well.
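> 
> A minimal sketch of option 1, built on the existing APIs (the helper name
> and the NULL fallback to rte_pktmbuf_init are only illustrative):
> 
>         #include <rte_mbuf.h>
>         #include <rte_mempool.h>
> 
>         static struct rte_mempool *
>         pktmbuf_pool_create_with_init(const char *name, unsigned n,
>                 unsigned cache_size, uint16_t priv_size,
>                 uint16_t data_room_size,
>                 rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
>                 int socket_id)
>         {
>                 struct rte_pktmbuf_pool_private mbp_priv;
>                 unsigned elt_size;
> 
>                 elt_size = sizeof(struct rte_mbuf) + (unsigned)priv_size +
>                         (unsigned)data_room_size;
>                 mbp_priv.mbuf_data_room_size = data_room_size;
>                 mbp_priv.mbuf_priv_size = priv_size;
> 
>                 /* fall back to the stock mbuf initializer when none is given */
>                 if (obj_init == NULL)
>                         obj_init = rte_pktmbuf_init;
> 
>                 return rte_mempool_create(name, n, elt_size, cache_size,
>                         sizeof(struct rte_pktmbuf_pool_private),
>                         rte_pktmbuf_pool_init, &mbp_priv,
>                         obj_init, obj_init_arg,
>                         socket_id, 0);
>         }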
> 
> Please share your opinion.
> 
> Regards,
> Hemant
> 
> 

