question about MemPool
Bing Zhao
bingz at nvidia.com
Wed Apr 24 18:37:15 CEST 2024
The memory pool cache is a different concept from the CPU hardware cache.
Unlike on some DSPs, on modern CPUs and systems it is not a common use case to assign address ranges to the cache and lock them in place as memory, especially since the data buffers are too large.
The memory pool cache is a LIFO that stores object pointers; it tries to reduce the memory footprint and reduce CPU cache conflicts and evictions by always preferring the most recently used memory.
Only when the memory itself is accessed will a cache line be checked and, if needed, loaded. A CPU cannot load a whole buffer (2KB for example) directly without any READ / WRITE / FLUSH through the cache.
BR. Bing
From: lonc0002 at yahoo.com <lonc0002 at yahoo.com>
Sent: Thursday, April 25, 2024 12:24 AM
To: users at dpdk.org
Subject: question about MemPool
Hello,
When doing a rte_mempool_get_bulk() with a cache enabled mempool, first objects are retrieved from cache and then from the common pool which I assume is sitting in shared memory (DDR or L3?). Wouldn't accessing the objects from the mempool in shared memory itself pull those objects into processor cache? Can this be avoided?
Thanks,
Vince