Mbuf pool cache size in share-nothing applications
Morten Brørup
mb at smartsharesystems.com
Sat Oct 5 19:27:42 CEST 2024
> From: Igor Gutorov [mailto:igootorov at gmail.com]
> Sent: Friday, 4 October 2024 15.49
>
> Hi,
>
> I'm a bit confused about certain semantics of `cache_size` in memory
> pools. I'm working on a DPDK application where each Rx queue gets its
> own mbuf mempool. The memory pools are never shared between lcores,
> mbufs are never passed between lcores, and so the deallocation of an
> mbuf will happen on the same lcore where it was allocated (it is a
> run-to-completion application). Is my understanding correct, that this
> completely eliminates any lock contention, and so `cache_size` can
> safely be set to 0?
Correct.
However, accessing objects in the cache is still faster than fetching them from the backing pool, because the cache is accessed through optimized inline functions in the fast path.
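To make the trade-off concrete, here is a sketch of a per-queue pool for a run-to-completion design like the one described above. The function name, pool size, and naming scheme are illustrative assumptions, not from the original mail; the snippet requires a DPDK environment and an initialized EAL, so it is not standalone-runnable.

```c
/* Sketch: one mbuf pool per Rx queue, never shared between lcores.
 * All names and sizes here are illustrative, not from the original mail. */
#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

struct rte_mempool *
create_queue_pool(uint16_t port_id, uint16_t queue_id, int socket_id)
{
	char name[RTE_MEMPOOL_NAMESIZE];

	snprintf(name, sizeof(name), "mbuf_pool_p%u_q%u", port_id, queue_id);

	/* cache_size = 0 is safe here: the pool is only ever touched by the
	 * lcore polling this queue, so there is no contention to hide.
	 * A non-zero cache_size would still be faster, because cached
	 * objects are served by the inline fast path instead of the
	 * backing pool. */
	return rte_pktmbuf_pool_create(name,
				       8191,	/* n: mbufs in the pool */
				       0,	/* cache_size */
				       0,	/* priv_size */
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       socket_id);
}
```

Changing the third argument from 0 to, e.g., 512 gives each pool a per-lcore cache while keeping the share-nothing layout unchanged.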
>
> Also, `rte_pktmbuf_pool_create()` internally calls
> `rte_mempool_create()` with the default `flags`. Would there be a
> performance benefit in creating mempools manually with the
> `RTE_MEMPOOL_F_SP_PUT` and `RTE_MEMPOOL_F_SC_GET` flags set?
If you want higher performance, create the mbuf pools with a large cache. Then your application will rarely access the mempool's backend, so its flags have less significance.
-Morten