[dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop

Nipun Gupta nipun.gupta at nxp.com
Mon Sep 30 09:46:42 CEST 2019



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk at gmail.com>
> Sent: Monday, September 30, 2019 12:08 PM
> To: Nipun Gupta <nipun.gupta at nxp.com>
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula at marvell.com>; Jerin Jacob
> Kollanukkaran <jerinj at marvell.com>; bruce.richardson at intel.com; Akhil
> Goyal <akhil.goyal at nxp.com>; Marko Kovacevic
> <marko.kovacevic at intel.com>; Ori Kam <orika at mellanox.com>; Radu
> Nicolau <radu.nicolau at intel.com>; Tomasz Kantecki
> <tomasz.kantecki at intel.com>; Sunil Kumar Kori <skori at marvell.com>;
> dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> eventdev main loop
> 
> On Mon, Sep 30, 2019 at 11:08 AM Nipun Gupta <nipun.gupta at nxp.com>
> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula at marvell.com>
> > > Sent: Friday, September 27, 2019 8:05 PM
> > > To: Nipun Gupta <nipun.gupta at nxp.com>; Jerin Jacob Kollanukkaran
> > > <jerinj at marvell.com>; bruce.richardson at intel.com; Akhil Goyal
> > > <akhil.goyal at nxp.com>; Marko Kovacevic <marko.kovacevic at intel.com>;
> > > Ori Kam <orika at mellanox.com>; Radu Nicolau <radu.nicolau at intel.com>;
> > > Tomasz Kantecki <tomasz.kantecki at intel.com>; Sunil Kumar Kori
> > > <skori at marvell.com>
> > > Cc: dev at dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> > > eventdev main loop
> > >
> > > >>
> > > >> From: Pavan Nikhilesh <pbhagavatula at marvell.com>
> > > >>
> > > >> Add event dev main loop based on enabled l2fwd options and
> > > >eventdev
> > > >> capabilities.
> > > >>
> > > >> Signed-off-by: Pavan Nikhilesh <pbhagavatula at marvell.com>
> > > >> ---
> > > >
> > > ><snip>
> > > >
> > > >> +		if (flags & L2FWD_EVENT_TX_DIRECT) {
> > > >> +			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> > > >> +			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> > > >> +								port_id,
> > > >> +								&ev, 1) &&
> > > >> +					!*done)
> > > >> +				;
> > > >> +		}
> > > >
> > > >In the TX direct mode we can send packets directly to the ethernet
> > > >device using the ethdev APIs. This will save unnecessary indirections
> > > >and event unwrapping within the driver.
> > >
> > > How would we guarantee atomicity of access to Tx queues between
> > > cores, given that we can only use one Tx queue?
> > > Also, if SCHED_TYPE is ORDERED, how would we guarantee flow ordering?
> > > The MT_LOCKFREE capability and flow ordering are abstracted through
> > > `rte_event_eth_tx_adapter_enqueue`.
> >
> > I understand your objective here. Probably in your case, DIRECT is
> > equivalent to giving the packet to the scheduler, which will pass it on
> > to the destined device.
> > On the NXP platform, DIRECT implies sending the packet directly to the
> > device (eth/crypto), with the scheduler pitching in internally.
> > Here we will need another option to send it directly to the device.
> > We can set up a call to discuss this, or send you a patch to
> > incorporate into your series.
> 
> Yes. Sending the patch will make us understand better.
> 
> Currently, we have two different means of abstracting Tx adapter
> fast-path changes:
> a) SINGLE LINK QUEUE
> b) rte_event_eth_tx_adapter_enqueue()
> 
> Could you please share why either of the above schemes does not work for
> NXP HW?
> If there is no additional functionality in
> rte_event_eth_tx_adapter_enqueue(), you could simply call the direct
> ethdev Tx burst function pointer to keep the abstraction intact and
> avoid one more code flow in the fast path.
> 
> If I guess right, since NXP HW supports MT_LOCKFREE and only atomic
> scheduling, calling eth_dev_tx_burst will be sufficient. But abstracting
> over rte_event_eth_tx_adapter_enqueue() makes application life easy. You
> can call the low-level DPAA2 Tx function inside
> rte_event_eth_tx_adapter_enqueue() to avoid any performance impact (we
> are doing the same).

Yes, that’s correct regarding our H/W capability.
Agreed that the application would become more complex with one more code
flow, but calling the Tx functions internally may cost additional CPU
cycles. Give us a couple of days to analyze the performance impact; as
you say, I too don't think it will be much. We should be able to manage
it within our driver.

> 
> 
> >
> > >
> > > @see examples/eventdev_pipeline and app/test-
> eventdev/test_pipeline_*.
> >
> > Yes, we are aware of them. They are one way of showing how to build a
> > complete eventdev pipeline, but they don't work on NXP HW.
> > We plan to send patches to fix them for NXP HW soon.
> >
> > Regards,
> > Nipun
> >
> > >
> > > >
> > > >> +
> > > >> +		if (timer_period > 0)
> > > >> +			__atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].tx,
> > > >> +					   1, __ATOMIC_RELAXED);
> > > >> +	}
> > > >> +}

