[dpdk-dev] When are mbufs released back to the mempool?

Schumm, Ken ken.schumm at intel.com
Thu Dec 19 20:09:48 CET 2013


Hello Olivier,

Do you know why the tx rings fill up and hold on to mbufs?

It seems they could be freed as soon as the DMA transfer is acknowledged instead of waiting until the ring is full.
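
For context, here is the knob I've been looking at; a minimal sketch
(port/queue ids, descriptor count and threshold are made up, and real
code should fill in the other txconf thresholds per the PMD docs):

#include <string.h>
#include <rte_ethdev.h>

/* tx_free_thresh tells the PMD how many transmitted descriptors may
 * accumulate before it tries to free the corresponding mbufs back to
 * the mempool. */
static int
setup_tx_queue(uint8_t port)
{
	struct rte_eth_txconf txconf;

	memset(&txconf, 0, sizeof(txconf));
	txconf.tx_free_thresh = 32; /* free mbufs after 32 used descriptors */

	/* queue 0, 512 descriptors, on the port's NUMA socket */
	return rte_eth_tx_queue_setup(port, 0, 512,
				      rte_eth_dev_socket_id(port), &txconf);
}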

Thanks!
Ken Schumm

-----Original Message-----
From: Olivier MATZ [mailto:olivier.matz at 6wind.com] 
Sent: Wednesday, December 18, 2013 1:03 AM
To: Schumm, Ken
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] When are mbufs released back to the mempool?

Hello Ken,

On 12/17/2013 07:13 PM, Schumm, Ken wrote:
 > When running l2fwd the number of available mbufs returned by
 > rte_mempool_count() starts at 7680 on an idle system.
 >
 > As traffic commences the count declines from 7680 to
 > 5632 (expected).

You are right, mbufs are kept in two places:

- in the mempool per-lcore cache: as you noticed, each lcore has
   a cache to avoid a (more) costly access to the common pool.

- also, mbufs stay in the hardware transmission ring of the
   NIC. Let's say the size of your hw tx ring is 512: when
   transmitting the 513th mbuf, the driver will free the first mbuf
   it gave to the NIC. Therefore, (hw-tx-ring-size * nb-tx-queue)
   mbufs can be stored in the tx hw rings (see the sketch below).
   Of course, the same applies to rx rings, but it is easier to
   see as they are filled when the driver is initialized.
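
To illustrate (a sketch: port id, queue id and burst size are made up,
error handling is omitted), you can watch the pool drain as the tx
ring holds on to the mbufs:

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ethdev.h>

static void
show_tx_ring_retention(struct rte_mempool *pool, uint8_t port)
{
	struct rte_mbuf *bufs[32];
	unsigned i;

	for (i = 0; i < 32; i++)
		bufs[i] = rte_pktmbuf_alloc(pool);

	/* the PMD now owns the mbufs and frees them lazily, so the
	 * count below stays lower until tx descriptors are recycled */
	rte_eth_tx_burst(port, 0, bufs, 32);

	printf("mbufs left in pool: %u\n", rte_mempool_count(pool));
}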

When choosing the number of mbufs, you need to take a value greater
than (hw-rx-ring-size * nb-rx-queue) + (hw-tx-ring-size * nb-tx-queue)
+ (nb-lcores * mbuf-pool-cache-size).
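
For example (sizes are illustrative, MBUF_SIZE follows the l2fwd
example, and 1024 extra mbufs are added as headroom for packets in
flight inside the application):

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

#define HW_RX_RING_SIZE 128
#define HW_TX_RING_SIZE 512
#define NB_RX_QUEUE 1
#define NB_TX_QUEUE 1
#define NB_LCORES 2
#define CACHE_SIZE 32
#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

/* lower bound from the formula above, plus some headroom */
#define NB_MBUF (HW_RX_RING_SIZE * NB_RX_QUEUE + \
		 HW_TX_RING_SIZE * NB_TX_QUEUE + \
		 NB_LCORES * CACHE_SIZE + 1024)

static struct rte_mempool *
create_pool(void)
{
	return rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE,
			CACHE_SIZE,
			sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, NULL,
			rte_pktmbuf_init, NULL,
			rte_socket_id(), 0);
}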

 > Is this also true of ring buffers?

No. If you are talking about rte_ring, there is no per-lcore cache in
that structure.
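
For instance (a sketch, names made up): an rte_ring is a plain FIFO of
pointers, and an enqueued object is immediately visible to all lcores:

#include <rte_ring.h>
#include <rte_lcore.h>

static void
ring_example(void)
{
	struct rte_ring *r;
	static int dummy = 42;
	void *obj;

	r = rte_ring_create("my_ring", 1024, rte_socket_id(), 0);
	if (r == NULL)
		return;

	rte_ring_enqueue(r, &dummy);	/* goes straight into the ring */
	rte_ring_dequeue(r, &obj);	/* no per-lcore cache involved */
}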

Regards,
Olivier


