[dpdk-dev] Question about cache_size in rte_mempool_create

Roy Shterman roy.shterman at gmail.com
Fri Nov 24 12:01:08 CET 2017



Sent from my iPhone

On 24 Nov 2017, at 12:03, Bruce Richardson <bruce.richardson at intel.com> wrote:

>> On Fri, Nov 24, 2017 at 11:39:54AM +0200, roy wrote:
>> Thanks for your answer, but I still don't understand how the ring is
>> dimensioned and how it is affected by the cache size.
>> 
>>> On 24/11/17 11:30, Bruce Richardson wrote:
>>>> On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
>>>> Hi,
>>>> 
>>>> In the documentation it says that:
>>>> 
>>>>  * @param cache_size
>>>>  *   If cache_size is non-zero, the rte_mempool library will try to
>>>>  *   limit the accesses to the common lockless pool, by maintaining a
>>>>  *   per-lcore object cache. This argument must be lower or equal to
>>>>  *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5. It is advised to
>>>>  *   choose cache_size to have "n modulo cache_size == 0": if this is
>>>>  *   not the case, some elements will always stay in the pool and will
>>>>  *   never be used. The access to the per-lcore table is of course
>>>>  *   faster than the multi-producer/consumer pool. The cache can be
>>>>  *   disabled if the cache_size argument is set to 0; it can be useful to
>>>>  *   avoid losing objects in cache.
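The constraints in that doc comment can be sketched as a quick validity check (an illustration, not DPDK code; `CACHE_MAX_SIZE` and `valid_cache_size` are names of mine, with `CACHE_MAX_SIZE` standing in for CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE, 512 in the default DPDK configuration):

```python
# A quick check of the documented constraints (illustrative only, not DPDK
# code). CACHE_MAX_SIZE stands in for CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE,
# which is 512 in the default DPDK configuration.
CACHE_MAX_SIZE = 512

def valid_cache_size(n, cache_size):
    """True if cache_size satisfies the rte_mempool_create() doc comment."""
    if cache_size == 0:                  # cache disabled -- always allowed
        return True
    return (cache_size <= CACHE_MAX_SIZE
            and cache_size <= n / 1.5
            and n % cache_size == 0)     # the advice: no stranded elements

print(valid_cache_size(8192, 256))       # True: 8192 % 256 == 0
print(valid_cache_size(8192, 250))       # False: 8192 % 250 != 0
```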
>>>> 
>>>> I wonder if someone can please explain the highlighted sentence: how
>>>> does the cache size affect the objects inside the ring?
>>> It has no effect on the objects themselves. Having a cache is
>>> strongly recommended for performance reasons. Accessing a shared ring
>>> for a mempool is very slow compared to pulling packets from a per-core
>>> cache. To test this you can run testpmd using different --mbcache
>>> parameters.
>> Still, I didn't understand the sentence from above:
>> 
>>     It is advised to choose cache_size to have "n modulo cache_size == 0":
>>     if this is not the case, some elements will always stay in the pool
>>     and will never be used.
>> 
> 
> This would be an artifact of the way in which the elements are taken
> from the pool ring. If a full cache-size burst of elements is not
> available in the ring, no elements from the ring are put in the cache.
> It just means that the pool can never fully be emptied. However, in most
> cases, even having the pool nearly empty indicates a problem, so
> practically I wouldn't be worried about this.
> 

But if getting a full cache's worth of objects from the pool fails, we fall back to getting just the number of objects passed to rte_mempool_get_bulk(). So in the rte_mempool_get() case we try to get one object out of the pool (ring), meaning that even when a full cache-size burst isn't available in the ring, each core can still take 1 object per rte_mempool_get() until the pool is empty. Am I wrong?
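The behaviour I'm describing can be sketched with a simplified Python model of the get path (function and variable names here are mine, not DPDK's; the real logic lives in rte_mempool's generic get):

```python
# A simplified model of the per-lcore get path under discussion
# (illustrative names, not DPDK's).
def mempool_get(ring, cache, cache_size, n_req=1):
    """Take n_req objects, preferring the per-lcore cache."""
    if len(cache) >= n_req:                   # fast path: serve from cache
        return [cache.pop() for _ in range(n_req)]
    want = n_req + (cache_size - len(cache))  # try a full cache refill
    if len(ring) >= want:
        for _ in range(want):
            cache.append(ring.pop())
        return [cache.pop() for _ in range(n_req)]
    if len(ring) >= n_req:                    # fallback: take exactly n_req,
        return [ring.pop() for _ in range(n_req)]  # bypassing the cache
    return None                               # pool looks empty to this core

# With only 5 objects left and cache_size 8, a full refill burst is never
# available, so every get falls back to the ring -- and the ring is drained
# one object at a time:
ring, cache = list(range(5)), []
while mempool_get(ring, cache, 8):
    pass
print(len(ring))  # 0
```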

>>> 
>>>> And also, does it mean that if I'm sharing a pool between different
>>>> cores, one core can see the pool as empty although it has objects in it?
>>>> 
>>> Yes, that can occur. You need to dimension the pool to take account of
>>> your cache usage.
>> 
>> Can you please elaborate on this? I'm working with multi-producer/
>> multi-consumer pools; in my understanding an object is either in some
>> lcore's cache or in the ring.
>> Each core looking for objects in the pool (ring) consults the prod/cons
>> head/tail pointers, so how can the caches of other cores affect this?
>> 
> 
> Mempool elements in the caches are free elements that are available for
> use. However, they are inaccessible to any core except the core which
> freed them to that cache. For a slightly simplified example, consider a
> pool with 256 elements, and a cache size of 64. If 4 cores all request 1
> buffer each, those four cores will each fill their caches and then take
> 1 element from those caches. This means that the ring will be empty even
> though only 4 buffers are actually in use - the other 63*4 buffers are
> in per-core caches. A 5th core which comes along and requests a buffer
> will be told that the pool is empty, despite there being 252 free
> elements.
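Bruce's arithmetic checks out; it can be reproduced with a small standalone sketch (illustrative only, with caches modelled as plain counters):

```python
# Checking the example above with a standalone sketch (caches modelled as
# plain counters; this is an illustration, not DPDK code).
POOL_SIZE, CACHE_SIZE, N_CORES = 256, 64, 4

ring = POOL_SIZE               # free elements in the shared ring
caches = [0] * N_CORES         # per-lcore cache occupancy
in_use = 0

for core in range(N_CORES):
    ring -= CACHE_SIZE              # cache miss: pull a full cache-size burst
    caches[core] = CACHE_SIZE - 1   # then hand 1 element to the caller
    in_use += 1

print(ring)         # 0: the ring is now empty
print(in_use)       # 4: only four buffers are actually in use
print(sum(caches))  # 252: free elements stranded in per-core caches
```

A 5th core now finds both the ring and its own cache empty, even though 252 elements are free.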
> 
> Therefore you need to take account of the possibilities of buffers being
> stored in caches when dimensioning your mempool.
> 
> /Bruce
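Following this advice, a rule of thumb (my formula, not an official DPDK one) is to provision for the peak number of in-flight buffers plus a full cache on every lcore that uses the pool:

```python
# A hedged rule of thumb, not an official DPDK formula: size the pool for
# peak in-flight buffers plus a full cache per lcore using the pool.
def min_pool_size(peak_in_flight, n_lcores, cache_size):
    return peak_in_flight + n_lcores * cache_size

# For the 4-core example above, 256 elements is just short of what a
# 5th requester would need:
print(min_pool_size(4, 4, 64))  # 260
```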
