[dpdk-dev] dpdk multi process increase the number of mbufs, throughput gets dropped
Bruce Richardson
bruce.richardson at intel.com
Fri Dec 18 13:07:41 CET 2015
On Thu, Dec 17, 2015 at 12:18:36PM +0800, 张伟 wrote:
> Hi all,
>
>
> When running the multi process example, does anybody know why increasing the number of mbufs causes the throughput to drop?
>
>
> In multi process example, there are two macros which are related to the number of mbufs
>
>
> #define MBUFS_PER_CLIENT 1536
> #define MBUFS_PER_PORT 1536
>
>
> If these two numbers are increased by 8 times, the throughput drops by about 10%. Does anybody know why?
>
> const unsigned num_mbufs = (num_clients * MBUFS_PER_CLIENT) \
>         + (ports->num_ports * MBUFS_PER_PORT);
> pktmbuf_pool = rte_mempool_create(PKTMBUF_POOL_NAME, num_mbufs,
>         MBUF_SIZE, MBUF_CACHE_SIZE,
>         sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
>         NULL, rte_pktmbuf_init, NULL, rte_socket_id(), NO_FLAGS);
One possible explanation is the memory footprint of the mempool. While the
per-lcore mempool caches operate in a LIFO (i.e. stack) manner, mbufs that are
allocated on one core and freed on another pass through a FIFO (i.e. ring)
inside the mempool. In that case you cycle through every buffer in the pool,
which can cause a slowdown once the mempool's footprint exceeds your CPU cache.
/Bruce