[dpdk-dev] [PATCH 1/6] cxgbe: Optimize forwarding performance for 40G

Ananyev, Konstantin konstantin.ananyev at intel.com
Mon Oct 5 16:09:27 CEST 2015


Hi Rahul,

> -----Original Message-----
> From: Rahul Lakkireddy [mailto:rahul.lakkireddy at chelsio.com]
> Sent: Monday, October 05, 2015 1:42 PM
> To: Ananyev, Konstantin
> Cc: Aaron Conole; dev at dpdk.org; Felix Marti; Kumar A S; Nirranjan Kirubaharan
> Subject: Re: [dpdk-dev] [PATCH 1/6] cxgbe: Optimize forwarding performance for 40G
> 
> Hi Konstantin,
> 
> On Monday, October 10/05/15, 2015 at 04:46:40 -0700, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Rahul Lakkireddy
> > > Sent: Monday, October 05, 2015 11:06 AM
> > > To: Aaron Conole
> > > Cc: dev at dpdk.org; Felix Marti; Kumar A S; Nirranjan Kirubaharan
> > > Subject: Re: [dpdk-dev] [PATCH 1/6] cxgbe: Optimize forwarding performance for 40G
> > >
> > > Hi Aaron,
> > >
> > > On Friday, October 10/02/15, 2015 at 14:48:28 -0700, Aaron Conole wrote:
> > > > Hi Rahul,
> > > >
> > > > Rahul Lakkireddy <rahul.lakkireddy at chelsio.com> writes:
> > > >
> > > > > Update SGE initialization with respect to the free-list manager
> > > > > configuration and ingress arbiter. Also update the refill logic to
> > > > > refill mbufs only after a certain threshold for Rx. Optimize Tx
> > > > > packet prefetch and free.
> > > > <<snip>>
> > > > >  			for (i = 0; i < sd->coalesce.idx; i++) {
> > > > > -				rte_pktmbuf_free(sd->coalesce.mbuf[i]);
> > > > > +				struct rte_mbuf *tmp = sd->coalesce.mbuf[i];
> > > > > +
> > > > > +				do {
> > > > > +					struct rte_mbuf *next = tmp->next;
> > > > > +
> > > > > +					rte_pktmbuf_free_seg(tmp);
> > > > > +					tmp = next;
> > > > > +				} while (tmp);
> > > > >  				sd->coalesce.mbuf[i] = NULL;
> > > > Pardon my ignorance here, but rte_pktmbuf_free() already does this
> > > > work. I can't actually see much difference between your rewrite of
> > > > this block and the implementation of rte_pktmbuf_free() (apart from
> > > > moving the loop's branch to the end). Did your microbenchmarking
> > > > really show this as an improvement?
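> > > >
> > > > For reference, rte_pktmbuf_free() in rte_mbuf.h reads roughly as
> > > > follows (lightly trimmed):
> > > >
> > > > static inline void rte_pktmbuf_free(struct rte_mbuf *m)
> > > > {
> > > > 	struct rte_mbuf *m_next;
> > > >
> > > > 	__rte_mbuf_sanity_check(m, 1);
> > > >
> > > > 	/* walk the segment chain, freeing one segment at a time */
> > > > 	while (m != NULL) {
> > > > 		m_next = m->next;
> > > > 		rte_pktmbuf_free_seg(m);
> > > > 		m = m_next;
> > > > 	}
> > > > }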
> > > >
> > > > Thanks for your time,
> > > > Aaron
> > >
> > > rte_pktmbuf_free calls rte_mbuf_sanity_check, which does a lot of
> > > checks.
> >
> > Only when RTE_LIBRTE_MBUF_DEBUG is enabled in your config.
> > By default it is switched off.
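> >
> > The call in the free path goes through the __rte_mbuf_sanity_check()
> > wrapper, which compiles away in a default build, roughly:
> >
> > #ifdef RTE_LIBRTE_MBUF_DEBUG
> > /* debug build: run the full mbuf consistency checks */
> > #define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
> > #else
> > /* default build: the check costs nothing */
> > #define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
> > #endif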
> 
> Right. I clearly missed this.
> I am running with default config only btw.
> 
> >
> > > This additional check seems redundant for single-segment packets,
> > > since rte_pktmbuf_free_seg also performs rte_mbuf_sanity_check.
> > >
> > > Several PMDs already prefer to use rte_pktmbuf_free_seg directly over
> > > rte_pktmbuf_free as it is faster.
> >
> > Other PMDs use rte_pktmbuf_free_seg() because each TD (transmit
> > descriptor) has exactly one segment associated with it: as the HW
> > finishes with a TD, the SW frees the associated segment.
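> > A sketch of that per-descriptor completion pattern (sw_ring, nb_desc
> > and the ring arithmetic here are illustrative, not from any one PMD):
> >
> > /* one segment per TD, so free_seg() alone is enough */
> > while (nb_done--) {
> > 	rte_pktmbuf_free_seg(txq->sw_ring[tx_id].mbuf);
> > 	txq->sw_ring[tx_id].mbuf = NULL;
> > 	tx_id = (tx_id + 1) & (txq->nb_desc - 1);
> > }
> >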
> > In your case I don't see any point in re-implementing rte_pktmbuf_free() manually,
> > and I don't think it would be any faster.
> >
> > Konstantin
> 
> As I mentioned below, I am clearly seeing a difference of 1 Mpps. And 1
> Mpps is not a small difference IMHO.

Agree with you here - it is a significant difference.

> 
> When running l3fwd with 8 queues, I also collected a perf report.
> When using rte_pktmbuf_free, it eats up around 6% CPU, as seen in the
> perf top report below:
> --------------------
> 32.00%  l3fwd                        [.] cxgbe_poll
> 22.25%  l3fwd                        [.] t4_eth_xmit
> 20.30%  l3fwd                        [.] main_loop
>  6.77%  l3fwd                        [.] rte_pktmbuf_free
>  4.86%  l3fwd                        [.] refill_fl_usembufs
>  2.00%  l3fwd                        [.] write_sgl
> .....
> --------------------
> 
> Whereas, when using rte_pktmbuf_free_seg directly, I don't see the
> above problem. The perf top report now comes out as:
> -------------------
> 33.36%  l3fwd                        [.] cxgbe_poll
> 32.69%  l3fwd                        [.] t4_eth_xmit
> 19.05%  l3fwd                        [.] main_loop
>  5.21%  l3fwd                        [.] refill_fl_usembufs
>  2.40%  l3fwd                        [.] write_sgl
> ....
> -------------------

I don't think those 6% disappeared anywhere.
As far as I can see, t4_eth_xmit() has now grown by roughly the same
amount (you still have the same job to do).
To me it looks like the compiler didn't really inline rte_pktmbuf_free()
in that case.
Could you add an 'always_inline' attribute to rte_pktmbuf_free() and
see whether it makes any difference?
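
Something like this untested sketch in rte_mbuf.h is what I mean (body
unchanged, only the attribute added):

-static inline void rte_pktmbuf_free(struct rte_mbuf *m)
+static inline void __attribute__((always_inline))
+rte_pktmbuf_free(struct rte_mbuf *m)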

Konstantin 

> 
> I obviously missed the debug flag for rte_mbuf_sanity_check.
> However, there is a clear difference of 1 Mpps. I don't know whether
> it is the switch from the while construct used in rte_pktmbuf_free to
> the do..while construct that I used that is making the difference.
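> Spelling out the only structural difference between the two loops
> (illustrative shapes, not the exact code):
> 
> /* rte_pktmbuf_free: tests the pointer before the first free */
> while (m != NULL) {
> 	next = m->next;
> 	rte_pktmbuf_free_seg(m);
> 	m = next;
> }
> 
> /* my loop: skips that first test, since coalesce.mbuf[i] is
>  * known to be non-NULL here */
> do {
> 	next = tmp->next;
> 	rte_pktmbuf_free_seg(tmp);
> 	tmp = next;
> } while (tmp);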
> 
> 
> >
> > >
> > > The forwarding perf. improvement with only this particular block is
> > > around 1 Mpps for 64B packets when using l3fwd with 8 queues.
> > >
> > > Thanks,
> > > Rahul
