[dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven programming model

Nipun Gupta nipun.gupta at nxp.com
Fri Feb 3 07:38:47 CET 2017



> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> Sent: Thursday, February 02, 2017 19:39
> To: Nipun Gupta <nipun.gupta at nxp.com>
> Cc: dev at dpdk.org; thomas.monjalon at 6wind.com;
> bruce.richardson at intel.com; Hemant Agrawal <hemant.agrawal at nxp.com>;
> gage.eads at intel.com; harry.van.haaren at intel.com
> Subject: Re: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> programming model
> 
> On Thu, Feb 02, 2017 at 11:18:52AM +0000, Nipun Gupta wrote:
> > Hi,
> >
> > I had a few queries/comments regarding the eventdev patches.
> >
> > Please see inline.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> > > Sent: Wednesday, December 21, 2016 14:55
> > > To: dev at dpdk.org
> > > Cc: thomas.monjalon at 6wind.com; bruce.richardson at intel.com; Hemant
> > > Agrawal <hemant.agrawal at nxp.com>; gage.eads at intel.com;
> > > harry.van.haaren at intel.com; Jerin Jacob
> <jerin.jacob at caviumnetworks.com>
> > > Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > > programming model
> > >
> > > In a polling model, lcores poll ethdev ports and associated
> > > rx queues directly to look for packets. In an event driven model,
> > > by contrast, lcores call the scheduler that selects packets for
> > > them based on programmer-specified criteria. The eventdev library
> > > adds support for the event driven programming model, which offers
> > > applications automatic multicore scaling, dynamic load balancing,
> > > pipelining, packet ingress order maintenance and
> > > synchronization services to simplify application packet processing.
> > >
> > > By introducing the event driven programming model, DPDK can support
> > > both polling and event driven programming models for packet processing,
> > > and applications are free to choose whichever model
> > > (or combination of the two) best suits their needs.
> > >
> > > This patch adds the eventdev specification header file.
> > >
> > > Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> > > Acked-by: Bruce Richardson <bruce.richardson at intel.com>
> > > ---
> > >  MAINTAINERS                        |    3 +
> > >  doc/api/doxy-api-index.md          |    1 +
> > >  doc/api/doxy-api.conf              |    1 +
> > >  lib/librte_eventdev/rte_eventdev.h | 1275
> > > ++++++++++++++++++++++++++++++++++++
> > >  4 files changed, 1280 insertions(+)
> > >  create mode 100644 lib/librte_eventdev/rte_eventdev.h
> >
> > <snip>
> >
> > > +
> > > +/**
> > > + * Event device information
> > > + */
> > > +struct rte_event_dev_info {
> > > +	const char *driver_name;	/**< Event driver name */
> > > +	struct rte_pci_device *pci_dev;	/**< PCI information */
> >
> > With 'rte_device' in place (rte_dev.h), should we not have 'rte_device'
> > instead of 'rte_pci_device' here?
> 
> Yes. Please post a patch to fix this. As the time of merging to
> next-eventdev tree it was not the case.

Sure. I'll send a patch regarding this.
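
Roughly along these lines (untested sketch; the new field name 'dev' is just
my placeholder):

-	struct rte_pci_device *pci_dev;	/**< PCI information */
+	struct rte_device *dev;		/**< Device information */

plus switching the header include over to rte_dev.h where needed.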

> 
> >
> > > + * The number of events dequeued is the number of scheduler contexts
> > > + * held by this port. These contexts are automatically released in the
> > > + * next rte_event_dequeue_burst() invocation, or they can be released
> > > + * early by invoking rte_event_enqueue_burst() with the
> > > + * RTE_EVENT_OP_RELEASE operation.
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + * @param port_id
> > > + *   The identifier of the event port.
> > > + * @param[out] ev
> > > + *   Points to an array of *nb_events* objects of type *rte_event* structure
> > > + *   for output to be populated with the dequeued event objects.
> > > + * @param nb_events
> > > + *   The maximum number of event objects to dequeue, typically number of
> > > + *   rte_event_port_dequeue_depth() available for this port.
> > > + *
> > > + * @param timeout_ticks
> > > + *   - 0 no-wait, returns immediately if there is no event.
> > > + *   - >0 wait for the event. If the device is configured with
> > > + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will
> > > + *   wait until an event is available or *timeout_ticks* time has
> > > + *   elapsed.
> >
> > Just for understanding - is the expectation that rte_event_dequeue_burst()
> > will wait till the timeout unless the requested number of events
> > (nb_events) has been received on the event port?
> 
> Yes. If you need any change then send an RFC patch for the header file
> change.
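
Ok, thanks. Just to confirm my understanding of the intended usage, a worker
core would then look roughly like this (rough sketch only; dev_id, port_id
and timeout_ticks are assumed to come from application setup):

#include <rte_eventdev.h>

#define BURST_SIZE 32

static void
worker_loop(uint8_t dev_id, uint8_t port_id, uint64_t timeout_ticks)
{
	struct rte_event ev[BURST_SIZE];
	uint16_t nb_rx, i;

	while (1) {
		/* Blocks for up to timeout_ticks (or dequeue_timeout_ns when
		 * RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT is not set) and may
		 * return fewer than BURST_SIZE events. */
		nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev,
						BURST_SIZE, timeout_ticks);
		if (nb_rx == 0)
			continue;

		for (i = 0; i < nb_rx; i++) {
			/* Process ev[i]. The scheduler contexts held for
			 * these events are released implicitly on the next
			 * dequeue, or can be released early with
			 * RTE_EVENT_OP_RELEASE via
			 * rte_event_enqueue_burst(). */
		}
	}
}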
> 
> >
> > > + *   if the device is not configured with
> > > + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
> > > + *   then this function will wait until an event is available or until
> > > + *   the *dequeue_timeout_ns* ns which was previously supplied to
> > > + *   rte_event_dev_configure() has elapsed.
> > > + *
> > > + * @return
> > > + * The number of event objects actually dequeued from the port. The return
> > > + * value can be less than the value of the *nb_events* parameter when the
> > > + * event port's queue is not full.
> > > + *
> > > + * @see rte_event_port_dequeue_depth()
> > > + */
> > > +uint16_t
> > > +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event
> > > ev[],
> > > +			uint16_t nb_events, uint64_t timeout_ticks);
> > > +
> >
> > <Snip>
> >
> > Regards,
> > Nipun

