[dpdk-dev] [PATCH] eventdev: flag to identify same destined packets enqueue

Jerin Jacob jerinjacobk at gmail.com
Tue Oct 22 10:45:25 CEST 2019


On Mon, Oct 21, 2019 at 5:05 PM Rao, Nikhil <nikhil.rao at intel.com> wrote:
>
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerinjacobk at gmail.com]
> > Sent: Thursday, October 3, 2019 3:57 PM
> > To: Hemant Agrawal <hemant.agrawal at nxp.com>
> > Cc: Rao, Nikhil <nikhil.rao at intel.com>; Nipun Gupta <nipun.gupta at nxp.com>;
> > Jerin Jacob <jerinj at marvell.com>; dpdk-dev <dev at dpdk.org>; Pavan Nikhilesh
> > <pbhagavatula at marvell.com>; Sunil Kumar Kori <skori at marvell.com>;
> > Richardson, Bruce <bruce.richardson at intel.com>; Kovacevic, Marko
> > <marko.kovacevic at intel.com>; Ori Kam <orika at mellanox.com>; Nicolau, Radu
> > <radu.nicolau at intel.com>; Kantecki, Tomasz <tomasz.kantecki at intel.com>;
> > Van Haaren, Harry <harry.van.haaren at intel.com>
> > Subject: Re: [dpdk-dev] [PATCH] eventdev: flag to identify same destined
> > packets enqueue
> >
> </snip>
>
> > > > > But I am not able to recollect, Why Nikhil would like to use the
> > > > > separate functions. Nikhil could you remind us why
> > > > > rte_event_eth_tx_adapter_enqueue() can not be used for sending the
> > > > > packet for SW Tx adapter.
> > > > >
> > > > [Nikhil] The goal was to keep the workers using the loop below.
> > > >
> > > > while (1) {
> > > >         rte_event_dequeue_burst(...);
> > > >         (event processing)
> > > >         rte_event_enqueue_burst(...);
> > > > }
> >
> > We do have specialized functions for specific enqueue use case like
> > rte_event_enqueue_new_burst() or
> > rte_event_enqueue_forward_burst() to avoid any performance impact.
> >
> > Since the PMD arguments are the same for rte_event_enqueue_burst() and
> > rte_event_eth_tx_adapter_enqueue(),
> > a simple function pointer assignment to
> > rte_event_eth_tx_adapter_enqueue, i.e. dev->txa_enqueue =
> > dev->enqueue_burst,
> > would have worked to have the same Tx function across all platforms
> > without performance overhead.
> > Of course I understand, the slow path direct event enqueue assignment
> > needs different treatment.
> >
> >
> > i.e. in the fast path:
> >
> > while (1) {
> >         rte_event_dequeue_burst(...);
> >         if (tx_stage)
> >                 rte_event_eth_tx_adapter_enqueue(...);
> > }
> >
> > What do you say?
> >
>
> Sorry, I missed this question previously. Unless I have misunderstood your email, the event processing stage would have if conditions for each of the stages (or minimally the Tx stage); no disagreement on that. The only difference would be the setup of the event[] arrays that are sent to rte_event_enqueue_burst() and rte_event_eth_tx_adapter_enqueue(), resulting in an additional call to rte_event_enqueue_burst(). If that's true, since the abstraction has a cost to it, should we be adding it?

If there is a cost then we should not be adding it.
I think the following scheme can avoid the cost, by adding the
following in the _slow path_, as the prototype of the driver API is
the same:

dev->txa_enqueue = dev->enqueue_burst;

>
> Nikhil
