[dpdk-dev] Question about cache_size in rte_mempool_create

Bruce Richardson bruce.richardson at intel.com
Fri Nov 24 10:30:29 CET 2017


On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
> Hi,
> 
> In the documentation it says that:
> 
>  * @param cache_size
>  *   If cache_size is non-zero, the rte_mempool library will try to
>  *   limit the accesses to the common lockless pool, by maintaining a
>  *   per-lcore object cache. This argument must be lower or equal to
>  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to choose
>  *   cache_size to have "n modulo cache_size == 0": if this is
>  *   not the case, some elements will always stay in the pool and will
>  *   never be used. The access to the per-lcore table is of course
>  *   faster than the multi-producer/consumer pool. The cache can be
>  *   disabled if the cache_size argument is set to 0; it can be useful to
>  *   avoid losing objects in cache.
> 
> Can someone please explain the highlighted sentence: how does the
> cache size affect the objects inside the ring?

It has no effect on the objects themselves. Having a cache is
strongly recommended for performance reasons: accessing the shared ring
of a mempool is very slow compared to pulling packets from a per-core
cache. To see the difference, you can run testpmd with different
--mbcache parameter values.

> Also, does this mean that if I'm sharing a pool between different
> cores, a core may see the pool as empty even though it has objects
> in it?
> 
> 
Yes, that can occur. You need to dimension the pool to take account of
your cache usage.

/Bruce
