[PATCH] net/intel: do not bypass mbuf lib for mbuf fast-free
Bruce Richardson
bruce.richardson at intel.com
Tue Apr 21 13:00:47 CEST 2026
On Tue, Apr 21, 2026 at 11:34:46AM +0100, Bruce Richardson wrote:
> On Sat, Apr 18, 2026 at 09:56:38AM +0000, Morten Brørup wrote:
> > Freeing mbufs directly into the mempool meant that mbuf instrumentation,
> > including mbuf history marking, was omitted.
> > The mbufs are now freed via the rte_mbuf_raw_free_bulk() function instead.
> >
> > Added a static_assert to ensure that type casting the array of struct
> > ci_tx_entry_vec to an array of rte_mbuf pointers remains sound.
> >
> > Performance note:
> > The (n & 31) condition was not removed.
> > For the default tx_rs_thresh value (32), the condition will be true.
> > And due to inlining, the rte_mbuf_raw_free_bulk() call ends up as an
> > rte_memcpy(), where the optimizer takes advantage of knowing that the
> > lower bits of the count are not set.
> > This should compensate somewhat for removing the hand-coded optimization
> > of copying in chunks of 32 mbufs.
> >
> > Signed-off-by: Morten Brørup <mb at smartsharesystems.com>
> > ---
>
> Ran a very quick perf test using a couple of 100G ports; no regression
> seen with this patch, maybe even a slight perf bump. Therefore:
>
> Acked-by: Bruce Richardson <bruce.richardson at intel.com>
> Tested-by: Bruce Richardson <bruce.richardson at intel.com>
>
Applied to dpdk-next-net-intel.
Thanks,
/Bruce