[dpdk-dev] [PATCH RFC 0/3] DPDK ethdev callback support

Bruce Richardson bruce.richardson at intel.com
Mon Dec 22 18:33:07 CET 2014


On Mon, Dec 22, 2014 at 06:02:53PM +0100, Thomas Monjalon wrote:
> Hi Bruce,
> 
> Callbacks, as hooks for applications, give more flexibility and are
> generally a good idea.
> In DPDK the main issue will be to avoid performance degradation.
> I see you use "unlikely" for callback branching.
> Could we further reduce the impact of this test by removing the queue array,
> i.e. having port-wide callbacks instead of per-queue callbacks?

I can give that a try, but I don't see it making much difference, if any. The
main thing to avoid with branching is branch mis-prediction, which should not
be a problem here: the user is not going to be adding or removing callbacks
between each RX and TX call, so the branches are highly predictable, i.e. they
always go the same way.
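To make that concrete, here is a rough sketch of where the test sits in the
RX path. This is not the actual patch code; all type, field and function
names below are invented for illustration. The point is that the callback
check is a single pointer test, marked unlikely(), on a value that only
changes when the application adds or removes a callback, so the predictor
sees the same outcome on every burst.

#include <stdint.h>
#include <stddef.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

/* Illustrative per-queue callback node (names invented). */
struct rx_callback {
	uint16_t (*fn)(uint16_t port, uint16_t queue, void **pkts,
	               uint16_t nb_pkts, void *arg);
	void *arg;
	struct rx_callback *next;
};

/* Illustrative per-queue state; cbs is NULL on the fast path. */
struct eth_queue {
	uint16_t (*rx_burst)(void *rxq, void **pkts, uint16_t nb_pkts);
	void *rxq;
	struct rx_callback *cbs;
};

static inline uint16_t
rx_burst_with_cbs(struct eth_queue *q, uint16_t port, uint16_t queue,
		void **pkts, uint16_t nb_pkts)
{
	uint16_t nb_rx = q->rx_burst(q->rxq, pkts, nb_pkts);
	struct rx_callback *cb;

	/* One well-predicted branch; taken only if callbacks exist. */
	if (unlikely(q->cbs != NULL))
		for (cb = q->cbs; cb != NULL; cb = cb->next)
			nb_rx = cb->fn(port, queue, pkts, nb_rx, cb->arg);
	return nb_rx;
}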
The reason for using per-queue callbacks is that I think we can do more with
them that way. For instance, if we want to do some additional processing or
calculations on IP traffic only, then we can use the hardware offloads on most
NICs to steer the IP traffic to a separate queue and apply the callbacks to
that queue alone. If the performance is the same, I think we should keep the
per-queue version.
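As an illustration of that use case, the registration might look something
like the snippet below. The NIC is assumed to be already configured, via its
flow-steering offload, to deliver IP traffic to queue 1, and the name
rte_eth_add_rx_callback() is only a placeholder for whatever the final API
call ends up being; the callback signature matches the invented one from the
sketch above.

/* Counting callback, attached to queue 1 only; queue 0 keeps the
 * unmodified fast path. All names here are again illustrative. */
static uint16_t
ip_stats_cb(uint16_t port, uint16_t queue, void **pkts,
		uint16_t nb_pkts, void *arg)
{
	uint64_t *ip_pkt_count = arg;

	(void)port; (void)queue; (void)pkts;
	*ip_pkt_count += nb_pkts;  /* every packet on this queue is IP */
	return nb_pkts;            /* pass the burst on unchanged */
}

static uint64_t ip_pkt_count;
/* ... after configuring the NIC to steer IP traffic to queue 1 ... */
rte_eth_add_rx_callback(port_id, 1 /* IP-only queue */,
		ip_stats_cb, &ip_pkt_count);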

> 
> 2014-12-22 16:47, Bruce Richardson:
> > Future extensions: in future the ethdev library can be extended to provide
> > a standard set of callbacks for use by drivers. 
> 
> Having callbacks for drivers seems strange to me.
> If drivers need to accomplish some tasks, they do it by implementing an
> ethdev service. New services are declared for new needs.
> Callbacks are the reverse logic. Why should it be needed?

Typo, I meant for applications! Drivers don't need them indeed.

> 
> > For now this patch set is RFC and still needs additional work to create
> > a remove function for callbacks and to add further testing code.
> > Since this adds in new code into the critical data path, I have run some
> > performance tests using testpmd with the ixgbe vector drivers (i.e. the
> > fastest, fast-path we have :-) ). The performance drop due to this patch
> > seems minimal to non-existent; rough tests on my system indicate a drop
> > of perhaps 1%.
> > 
> > All feedback welcome.
> 
> It would be good to have more performance tests with different configurations.

Sure, if you have ideas for specific tests you'd like to see, I'll try and
get some numbers. What I did look at was the performance impact of this patch
without actually putting any callbacks in place, and the worst case there is
hardly noticeable. For an empty callback, i.e. the pure callback overhead, the
performance drop should still be in the low single-digit percentages, but I'll
test to confirm that. For other, slower RX and TX paths, e.g. those using
scattered packets or TX offloads, the performance impact will be even less.
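For reference, the empty callback I have in mind for that test is a pure
pass-through, using the same invented signature as in the sketch above, so
any measured drop is nothing but the branch plus the indirect call:

/* Pass-through callback: does no work, so any performance drop it
 * causes is pure callback overhead. */
static uint16_t
empty_rx_cb(uint16_t port, uint16_t queue, void **pkts,
		uint16_t nb_pkts, void *arg)
{
	(void)port; (void)queue; (void)pkts; (void)arg;
	return nb_pkts;  /* burst returned untouched */
}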

Regards,
/Bruce

> 
> Thanks
> -- 
> Thomas

