[dpdk-dev] [RFC 19.11 1/2] ethdev: make DPDK core functions non-inline

Jerin Jacob Kollanukkaran jerinj at marvell.com
Tue Jul 30 17:45:38 CEST 2019


> -----Original Message-----
> From: Bruce Richardson <bruce.richardson at intel.com>
> Sent: Tuesday, July 30, 2019 9:02 PM
> To: Jerin Jacob Kollanukkaran <jerinj at marvell.com>
> Cc: Marcin Zapolski <marcinx.a.zapolski at intel.com>; dev at dpdk.org
> Subject: [EXT] Re: [dpdk-dev] [RFC 19.11 1/2] ethdev: make DPDK core functions
> non-inline
> 
> ----------------------------------------------------------------------
> On Tue, Jul 30, 2019 at 03:01:00PM +0000, Jerin Jacob Kollanukkaran wrote:
> > > -----Original Message-----
> > > From: dev <dev-bounces at dpdk.org> On Behalf Of Marcin Zapolski
> > > Sent: Tuesday, July 30, 2019 6:20 PM
> > > To: dev at dpdk.org
> > > Cc: Marcin Zapolski <marcinx.a.zapolski at intel.com>
> > > Subject: [dpdk-dev] [RFC 19.11 1/2] ethdev: make DPDK core functions non-inline
> > >
> > > Make rte_eth_rx_burst, rte_eth_tx_burst and other static inline
> > > ethdev functions not inline. They are referencing DPDK internal
> > > structures and inlining forces those structures to be exposed to user
> > > applications.
> > >
> > > In internal testing with i40e NICs a performance drop of about 2%
> > > was observed with testpmd.
> >
> > I tested on two classes of arm64 machines (high-end and low-end); one has a
> > 1.4% drop and the other has a 3.6% drop.
> >
> This is with testpmd only, right? I'd just point out that we need to remember that
> these numbers need to be scaled down appropriately for a real-world app where
> IO is only a (hopefully small) proportion of the packet processing budget. For
> example, I would expect the ~2% drop we saw in testpmd to correspond to a
> <0.5% drop in something like OVS.
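
(To make the scaling arithmetic explicit: if packet I/O is, say, 20-25% of the
per-packet cycle budget in such an application, a ~2% regression confined to the
I/O part works out to roughly 2% x 25% = 0.5% overall, which is where the OVS
estimate above comes from.)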

I see it from a bit different view: cycles saved in the infrastructure layer are cycles
gained in the application. So IMO it varies with what the end-user application needs and
what kind of machine it runs on.
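
For readers following the RFC, the fast path in question is roughly the static inline
wrapper below from rte_ethdev.h (a simplified sketch; the debug checks and RX callback
handling are omitted). The patch moves this body into rte_ethdev.c, so applications see
only a plain function declaration and struct rte_eth_dev no longer needs to be exposed,
at the cost of one extra non-inlinable call per burst:

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
                 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
        struct rte_eth_dev *dev = &rte_eth_devices[port_id];

        /* Because this is inlined into the application, struct rte_eth_dev and
         * the rte_eth_devices[] array end up being part of the exposed ABI. */
        return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
                                    rx_pkts, nb_pkts);
}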

> 
> > I second not exposing internal data structures, to avoid an ABI break.
> >
> > IMO, this patch has a performance issue because it fixes the problem in a
> > simplistic way.
> >
> > It is not worth adding function call overhead just to call the driver
> > function.  Some thoughts below on reducing the performance impact
> > without exposing internal structures.
> >
> The big concern I have with what you propose is that it would involve changing
> each and every ethdev driver in DPDK! I'd prefer to make sure that the impact of
> this change is actually felt in real-world apps before we start looking to make
> such updates across the DPDK codebase.

I see those changes as a no-brainer from the driver POV. Once we add it in one driver, individual
PMD maintainers can update their drivers easily. I think we can do it once for all of them.
I am sure you are aware of how hard it is to make a 2% improvement in a driver. I would rather
spend time on this no-brainer to get that 2% improvement back.
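
To illustrate the general direction only (the structure and field names below are
hypothetical, not from the RFC or from any posted patch): the library could export one
small, ABI-stable per-port structure that holds just the burst function pointers and
the queue pointers, so the public wrappers can stay inline without exposing
struct rte_eth_dev:

/* Hypothetical sketch -- names are illustrative only. */
struct rte_eth_burst_api {
        uint16_t (*rx_burst)(void *rxq, struct rte_mbuf **pkts, uint16_t n);
        uint16_t (*tx_burst)(void *txq, struct rte_mbuf **pkts, uint16_t n);
        void **rx_queues;       /* per-queue private data owned by the PMD */
        void **tx_queues;
};

/* Filled in at device configure/start time by the ethdev layer or the PMD. */
extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
                 struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
        struct rte_eth_burst_api *api = &rte_eth_burst_api[port_id];

        /* Still a single indirect call into the driver, no extra wrapper call,
         * and nothing internal to ethdev is visible to the application. */
        return api->rx_burst(api->rx_queues[queue_id], rx_pkts, nb_pkts);
}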


> 
> > And I think we need to follow a similar mechanism for cryptodev,
> > eventdev, rawdev etc., so bringing in a common scheme to address these
> > semantics will be useful.
> >
> Agreed.
> 
> Regards,
> /Bruce

