[dpdk-dev] [PATCH v7 1/5] mempool: support external mempool operations

Hunt, David david.hunt at intel.com
Fri Jun 3 12:28:14 CEST 2016



On 6/3/2016 7:38 AM, Jerin Jacob wrote:
> On Thu, Jun 02, 2016 at 02:27:19PM +0100, David Hunt wrote:
>
> [snip]
>
>>   	/* create the internal ring if not already done */
>>   	if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
> |> 1) Maybe RING can be replaced with some other higher abstraction name
> |> for the internal MEMPOOL_F_RING_CREATED flag
> |
> |Agreed. I'll change to MEMPOOL_F_POOL_CREATED, since we're already
> |changing the *ring
> |element of the mempool struct to *pool
>
> Looks like you have missed the above review comment?

Ah, yes, I'll address that in the next patch.
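
For reference, that check should end up reading something like this after
the rename (sketch only; the exact wrapper name may still change in the
next revision):

	/* create the pool data (ring or otherwise) if not already done */
	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
		if (rte_mempool_ops_alloc(mp) == NULL)
			return -ENOMEM;
		mp->flags |= MEMPOOL_F_POOL_CREATED;
	}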

I have to remove some 'const's in the alloc functions anyway, which I
introduced in the last revision and shouldn't have. Future handlers
(including the stack handler) will need to set the pool_data in the alloc.
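
To illustrate why (sketch only, not the final stack handler; the struct
and function names here are just placeholders), the alloc callback has to
be able to write mp->pool_data:

#include <rte_malloc.h>
#include <rte_mempool.h>
#include <rte_spinlock.h>

/* hypothetical private data for a stack-based handler */
struct rte_mempool_stack {
	rte_spinlock_t sl;
	uint32_t size;
	uint32_t len;
	void *objs[];
};

static void *
stack_alloc(struct rte_mempool *mp)
{
	struct rte_mempool_stack *s;

	s = rte_zmalloc_socket("mempool-stack",
			sizeof(*s) + mp->size * sizeof(void *),
			RTE_CACHE_LINE_SIZE, mp->socket_id);
	if (s == NULL)
		return NULL;

	rte_spinlock_init(&s->sl);
	s->size = mp->size;

	mp->pool_data = s;	/* wouldn't compile if mp were const */
	return s;
}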


>
> [snip]
>
>> static inline struct rte_mempool_ops *
>> rte_mempool_ops_get(int ops_index)
>> {
>> 	return &rte_mempool_ops_table.ops[ops_index];
>> }
> |> 2) Considering "get" and "put" are the fast-path callbacks for
> |> pool-manager, is it possible to avoid the extra overhead of the
> |> following
> |> _load_ and additional cache line on each call,
> |> rte_mempool_handler_table.handler[handler_idx]
> |>
> |> I understand it is for multiprocess support, but I am thinking we can
> |> introduce something like the ethernet API support for multiprocess and
> |> resolve the "put" and "get" function pointers on init and store them in
> |> struct mempool. Something like:
> |>
> |> file: drivers/net/ixgbe/ixgbe_ethdev.c
> |> search for if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> |
> |I'll look at this one before posting the next version of the patch
> |(soon). :)
>
> Have you checked the above comment? If it is difficult then postpone it. But
> IMO it will reduce a few cycles in the fast path and reduce the cache usage
> in the fast path.
>
> [snip]

I looked at it, but I'd need to do some more digging to see how it can be
done properly. I'm not seeing any performance drop at the moment, and it
may be a good way to improve further down the line. Is it OK to postpone?
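
For the record, my reading of the suggestion (field names below are
hypothetical; none of this is in the current patchset) is to resolve the
ops pointer once at create/attach time and cache it in the mempool, rather
than indexing the global ops table on every get/put:

/*
 * Hypothetical sketch only: assumes a cached ops pointer (mp->ops) were
 * added to struct rte_mempool, which this patchset does not do.
 */
static int
mempool_resolve_ops(struct rte_mempool *mp)
{
	/* today the fast path does this lookup on every get/put */
	struct rte_mempool_ops *ops = rte_mempool_ops_get(mp->ops_index);

	if (ops == NULL)
		return -EINVAL;

	/*
	 * Cache per process. A secondary process can't reuse the primary's
	 * function pointers, so it would redo this step on attach, much
	 * like the rte_eal_process_type() != RTE_PROC_PRIMARY path in
	 * drivers/net/ixgbe/ixgbe_ethdev.c re-sets the rx functions.
	 */
	mp->ops = ops;
	return 0;
}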

>> +
>> +/**
>> + * @internal wrapper for external mempool manager put callback.
>> + *
>> + * @param mp
>> + *   Pointer to the memory pool.
>> + * @param obj_table
>> + *   Pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   Number of objects to put.
>> + * @return
>> + *   - 0: Success; n objects supplied.
>> + *   - <0: Error; code of put function.
>> + */
>> +static inline int
>> +rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>> +		unsigned n)
>> +{
>> +	struct rte_mempool_ops *ops;
>> +
>> +	ops = rte_mempool_ops_get(mp->ops_index);
>> +	return ops->put(mp->pool_data, obj_table, n);
> Passing "pool_data" by value: on 32-bit systems, casting back to pool_id will
> be an issue as void * on 32-bit is 4B. IMO, maybe we can use uint64_t to
> pass by value and typecast to void * to fix it.

OK, I see the problem. There are 4 callbacks that need to change: free,
get, put and get_count.
So the callbacks will be:

typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
typedef void (*rte_mempool_free_t)(uint64_t p);
typedef int (*rte_mempool_put_t)(uint64_t p, void * const *obj_table,
		unsigned int n);
typedef int (*rte_mempool_get_t)(uint64_t p, void **obj_table,
		unsigned int n);
typedef unsigned (*rte_mempool_get_count)(uint64_t p);
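
As an example of what that looks like for a pointer-backed handler (sketch
only; assumes the ring-based put is adapted to the signature above), the
uint64_t just round-trips through uintptr_t, which stays correct on 32-bit:

#include <stdint.h>
#include <rte_ring.h>

/* hypothetical ring-backed put under the uint64_t signature above */
static int
common_ring_mp_put(uint64_t p, void * const *obj_table, unsigned int n)
{
	/* pointer-backed pools cast via uintptr_t; HW pool managers can
	 * carry a 64-bit pool id in 'p' directly */
	struct rte_ring *r = (struct rte_ring *)(uintptr_t)p;

	return rte_ring_mp_enqueue_bulk(r, obj_table, n);
}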


Regards,
Dave.


