[dpdk-dev] [dpdk-stable] [PATCH v3 1/2] net/mlx5: optimize inline mbuf freeing

Slava Ovsiienko viacheslavo at nvidia.com
Thu Jan 28 10:14:53 CET 2021


Hi, Ferruh

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit at intel.com>
> Sent: Wednesday, January 27, 2021 14:45
> To: Slava Ovsiienko <viacheslavo at nvidia.com>; dev at dpdk.org
> Cc: Raslan Darawsheh <rasland at nvidia.com>; Matan Azrad
> <matan at nvidia.com>; Ori Kam <orika at nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas at monjalon.net>; Alexander Kozyrev
> <akozyrev at nvidia.com>; stable at dpdk.org
> Subject: Re: [dpdk-stable] [PATCH v3 1/2] net/mlx5: optimize inline mbuf
> freeing
> 
> On 1/22/2021 5:12 PM, Viacheslav Ovsiienko wrote:
> > The mlx5 PMD supports packet data inlining by pushing data into the
> > transmit descriptor. If the packet is short enough and all of its data
> > are inlined, the mbuf is no longer needed for sending the data and can
> > be freed.
> >
> > The mbuf free was previously performed in the innermost loop building
> > the transmit descriptors. This patch postpones the mbuf free to the
> > tx_burst routine exit, optimizing the loop and allowing bulk freeing
> > of multiple mbufs in a single pool API call.
> >
> > Cc: stable at dpdk.org
> >
> 
> Hi Slava,
> 
> This patch is an optimization for inline mbufs, right? It is not a fix,
> so should it be backported?
Not critical, but nice to have this small optimization in LTS.

> 
> cc'ed LTS maintainers.
> 
> I am dropping the stable tag for now in next-net; I can add it later
> based on the discussion result.

OK, let's consider this backporting in a dedicated way, thank you.

With best regards, Slava
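
For illustration, the deferred-free technique described in the quoted
commit message can be sketched roughly as follows. This is a minimal,
hypothetical example (sketch_tx_burst and SKETCH_MAX_BURST are made-up
names), not the actual mlx5 datapath code, and the descriptor-building
step is omitted:

#include <stdint.h>
#include <rte_mbuf.h>

#define SKETCH_MAX_BURST 64

static uint16_t
sketch_tx_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	struct rte_mbuf *to_free[SKETCH_MAX_BURST];
	unsigned int nb_free = 0;
	uint16_t i;

	if (nb_pkts > SKETCH_MAX_BURST)
		nb_pkts = SKETCH_MAX_BURST;
	for (i = 0; i < nb_pkts; i++) {
		/*
		 * ... push the packet data into the Tx descriptor here
		 * (device specific, omitted) ...
		 *
		 * Before the patch, rte_pktmbuf_free_seg(pkts[i]) would be
		 * called right here, in the innermost loop. After the
		 * patch, the fully inlined mbuf is only collected and is
		 * freed in bulk at tx_burst exit.
		 */
		to_free[nb_free++] = pkts[i];
	}
	/* One bulk call instead of nb_pkts individual frees. */
	rte_pktmbuf_free_bulk(to_free, nb_free);
	return nb_pkts;
}

The bulk call amortizes the per-mbuf mempool overhead; when all the
mbufs come from the same pool they can be returned with a single
rte_mempool_put_bulk() underneath.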

