[dpdk-dev] [EXT] Re: [RFC 19.11 1/2] ethdev: make DPDK core functions non-inline

Jerin Jacob Kollanukkaran jerinj at marvell.com
Tue Jul 30 18:24:40 CEST 2019


> -----Original Message-----
> From: Bruce Richardson <bruce.richardson at intel.com>
> Sent: Tuesday, July 30, 2019 9:36 PM
> To: Jerin Jacob Kollanukkaran <jerinj at marvell.com>
> Cc: Marcin Zapolski <marcinx.a.zapolski at intel.com>; dev at dpdk.org
> Subject: [EXT] Re: [dpdk-dev] [RFC 19.11 1/2] ethdev: make DPDK core functions
> non-inline
> 
> On Tue, Jul 30, 2019 at 03:45:38PM +0000, Jerin Jacob Kollanukkaran wrote:
> > > -----Original Message-----
> > > From: Bruce Richardson <bruce.richardson at intel.com>
> > > Sent: Tuesday, July 30, 2019 9:02 PM
> > > To: Jerin Jacob Kollanukkaran <jerinj at marvell.com>
> > > Cc: Marcin Zapolski <marcinx.a.zapolski at intel.com>; dev at dpdk.org
> > > Subject: [EXT] Re: [dpdk-dev] [RFC 19.11 1/2] ethdev: make DPDK core
> > > functions non-inline
> > >
> > > ----------------------------------------------------------------------
> > > On Tue, Jul 30, 2019 at 03:01:00PM +0000, Jerin Jacob Kollanukkaran wrote:
> > > > > -----Original Message-----
> > > > > From: dev <dev-bounces at dpdk.org> On Behalf Of Marcin Zapolski
> > > > > Sent: Tuesday, July 30, 2019 6:20 PM
> > > > > To: dev at dpdk.org
> > > > > Cc: Marcin Zapolski <marcinx.a.zapolski at intel.com>
> > > > > Subject: [dpdk-dev] [RFC 19.11 1/2] ethdev: make DPDK core functions non-inline
> > > > >
> > > > > Make rte_eth_rx_burst, rte_eth_tx_burst and other static inline
> > > > > ethdev functions non-inline. They reference DPDK internal
> > > > > structures, and inlining forces those structures to be exposed to
> > > > > user applications.
> > > > >
> > > > > In internal testing with i40e NICs a performance drop of about
> > > > > 2% was observed with testpmd.
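
For context, the wrapper in question looks roughly like this in rte_ethdev.h today (a simplified excerpt; parameter validation and RX callback handling are omitted). Because it is a static inline expanded into the application, struct rte_eth_dev, its data, and the rte_eth_devices[] array all have to be visible in the public headers:

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	/* Dereferences the internal device array and per-device data ... */
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

	/* ... and makes an indirect call through the PMD's burst pointer. */
	return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
				    rx_pkts, nb_pkts);
}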
> > > >
> > > > I tested on two classes of arm64 machines (high-end and low-end); one
> > > > has a 1.4% drop and the other has a 3.6% drop.
> > > >
> > > This is with testpmd only, right? I'd just point out that we need to
> > > remember that these numbers need to be scaled down appropriately for
> > > a real-world app, where IO is only a (hopefully small) proportion of
> > > the packet-processing budget. For example, I would expect the ~2%
> > > drop we saw in testpmd to correspond to a <0.5% drop in something like OVS.
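
(A rough back-of-the-envelope check on that scaling, with illustrative numbers only: testpmd is almost pure IO, so a ~2% drop there amounts to roughly one extra cycle per packet; an application spending a few hundred cycles per packet would see that same extra cycle as only ~0.3-0.5% of its budget.)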
> >
> > I see it from a slightly different view: cycles saved in the infrastructure layer
> > are cycles gained by the application. So IMO it varies with what the end-user
> > application needs and what kind of machine it runs on.
> >
> Sure. My thinking is more that getting ABI compatibility involves some trade-offs,
> and that spending one more cycle per packet, when an app workload is typically
> hundreds of cycles, is, I believe, a small cost worth paying.

I agree. But we should take on that cost only if it is a real cost. I don't think
we need two levels of indirection.
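
To spell out the "two levels of indirection" concern: the RFC essentially drops the static inline and moves the same body into rte_ethdev.c, so each burst on the fast path becomes

	app -> rte_eth_rx_burst()                 /* real call, no longer inlined */
	    -> (*dev->rx_pkt_burst)(queue, ...)   /* indirect call into the PMD   */

i.e. two call levels where the inline version compiles down to just the indirect call.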

> 
> > >
> > > > I second not exposing internal data structures, to avoid an ABI break.
> > > >
> > > > IMO, this patch has a performance issue because it fixes the problem in
> > > > the simplest way.
> > > >
> > > > It is not worth having function-call overhead just to call the driver
> > > > function. Some thoughts below on reducing the performance impact
> > > > without exposing internal structures.
> > > >
> > > The big concern I have with what you propose is that it would involve
> > > changing each and every ethdev driver in DPDK! I'd prefer to make
> > > sure that the impact of this change is actually felt in real-world
> > > apps before we start looking to make such updates across the DPDK
> > > codebase.
> >
> > I see those changes as a NO BRAINER from the driver POV. Once we add it in one
> > driver, individual PMD maintainers can update theirs easily. I think we can do it
> > once for all.
> 
> Ok, if it's doable in one go then sure. The issue is that if even one driver is not
> updated we can't switch over; all have to effectively be done simultaneously. [It

I agree. But I think we can give drivers enough time to migrate.
If a driver does not migrate in time, we can take the necessary action to fix it.
If it is a no-brainer change then all drivers will do it. We cannot pay the cost
because one driver is not being maintained. I can commit to taking care of ALL Marvell PMDs.

> would also make backporting fixes trickier, but I wouldn't be concerned about
> that particularly.]
> 
> Have you tried out making the changes to a driver or two, to see how large the
> delta is? [And to verify it doesn't affect performance]

I hope the RFC author will take the first stab; I can help in reviewing it.

> 
> > I am sure you must be aware of how hard it is to make a 2% improvement in a
> > driver. I can spend time on this NO brainer to get the 2% improvement back. I
> > prefer the latter.
> >
> The other alternative I see is to leave the inline functions there, just disabled by
> default, and put in a build-time option for reduced ABI compatibility. That way
> the standard-built packages are all ABI compatible, but those who absolutely
> need max perf and are rolling their own build to get it can disable that ABI
> compatibility.

But currently there is no ABI break, right? We should do it before we have one on those
structures. I think we should clean up the low-hanging ones first and work on the fast-path
structures in parallel.
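
For reference, the build-time escape hatch Bruce describes could look something like the sketch below; RTE_ETHDEV_INLINE_FAST_PATH is a made-up name used only for illustration, not an existing DPDK build option.

#ifdef RTE_ETHDEV_INLINE_FAST_PATH
/* Opt-in, maximum-performance build: keep the static inline wrapper,
 * which exposes struct rte_eth_dev and friends and therefore gives up
 * ABI stability on those structures. */
static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

	return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
				    rx_pkts, nb_pkts);
}
#else
/* Default, ABI-stable build: only a declaration here; the body lives in
 * rte_ethdev.c, so the internal structures stay out of the public ABI. */
uint16_t rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
			  struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
#endif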


> 
> /Bruce

