[dpdk-dev] [RFC] [PATCH v2] libeventdev: event driven programming model framework for DPDK

Jerin Jacob jerin.jacob at caviumnetworks.com
Fri Oct 14 11:26:09 CEST 2016


On Thu, Oct 13, 2016 at 11:14:38PM -0500, Bill Fischofer wrote:
> Hi Jerin,

Hi Bill,

Thanks for the review.

[snip]
> > + * If the device init operation is successful, the correspondence between
> > + * the device identifier assigned to the new device and its associated
> > + * *rte_event_dev* structure is effectively registered.
> > + * Otherwise, both the *rte_event_dev* structure and the device
> > identifier are
> > + * freed.
> > + *
> > + * The functions exported by the application Event API to setup a device
> > + * designated by its device identifier must be invoked in the following
> > order:
> > + *     - rte_event_dev_configure()
> > + *     - rte_event_queue_setup()
> > + *     - rte_event_port_setup()
> > + *     - rte_event_port_link()
> > + *     - rte_event_dev_start()
> > + *
> > + * Then, the application can invoke, in any order, the functions
> > + * exported by the Event API to schedule events, dequeue events, enqueue
> > events,
> > + * change event queue(s) to event port [un]link establishment and so on.
> > + *
> > + * Application may use rte_event_[queue/port]_default_conf_get() to get
> > the
> > + * default configuration to set up an event queue or event port by
> > + * overriding few default values.
> > + *
> > + * If the application wants to change the configuration (i.e. call
> > + * rte_event_dev_configure(), rte_event_queue_setup(), or
> > + * rte_event_port_setup()), it must call rte_event_dev_stop() first to
> > stop the
> > + * device and then do the reconfiguration before calling
> > rte_event_dev_start()
> > + * again. The schedule, enqueue and dequeue functions should not be
> > invoked
> > + * when the device is stopped.
> >
> 
> Given this requirement, the question is what happens to events that are "in
> flight" at the time rte_event_dev_stop() is called? Is stop an asynchronous
> operation that quiesces the event_dev and allows in-flight events to drain
> from queues/ports prior to fully stopping, or is some sort of separate
> explicit quiesce mechanism required? If stop is synchronous and simply
> halts the event_dev, then how is an application to know if subsequent
> configure/setup calls would leave these pending events with no place to
> stand?
>

From an application API perspective, rte_event_dev_stop() is a synchronous function.
If stop has been called in order to re-configure the number of queues, ports etc. of
the device, then preservation of "in flight" entries is implementation defined.
Otherwise, "in flight" entries will be preserved.
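
For example, a minimal re-configuration sequence from the application's point
of view could look like the sketch below (the configure() signature and the
config structure contents are assumptions for illustration, not the final API):

/* Sketch only: stop, re-configure, then restart an event device. */
static int
app_reconfigure_eventdev(uint8_t dev_id, struct rte_event_dev_config *dev_conf)
{
        int ret;

        /* Synchronous; the device is quiesced when this returns. */
        rte_event_dev_stop(dev_id);

        /*
         * Re-configure with, e.g., a different number of queues/ports.
         * Whether "in flight" entries survive this step is implementation
         * defined, as noted above.
         */
        ret = rte_event_dev_configure(dev_id, dev_conf);
        if (ret < 0)
                return ret;

        /*
         * Re-do rte_event_queue_setup(), rte_event_port_setup() and
         * rte_event_port_link() as required, then restart.
         */
        return rte_event_dev_start(dev_id);
}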

[snip]
 
> > +extern int
> > +rte_event_dev_socket_id(uint8_t dev_id);
> > +
> > +/* Event device capability bitmap flags */
> > +#define RTE_EVENT_DEV_CAP_QUEUE_QOS        (1 << 0)
> > +/**< Event scheduling prioritization is based on the priority associated
> > with
> > + *  each event queue.
> > + *
> > + *  \see rte_event_queue_setup(), RTE_EVENT_QUEUE_PRIORITY_NORMAL
> > + */
> > +#define RTE_EVENT_DEV_CAP_EVENT_QOS        (1 << 1)
> > +/**< Event scheduling prioritization is based on the priority associated
> > with
> > + *  each event. Priority of each event is supplied in *rte_event*
> > structure
> > + *  on each enqueue operation.
> > + *
> > + *  \see rte_event_enqueue()
> > + */
> > +
> > +/**
> > + * Event device information
> > + */
> > +struct rte_event_dev_info {
> > +       const char *driver_name;        /**< Event driver name */
> > +       struct rte_pci_device *pci_dev; /**< PCI information */
> > +       uint32_t min_dequeue_wait_ns;
> > +       /**< Minimum supported global dequeue wait delay(ns) by this
> > device */
> > +       uint32_t max_dequeue_wait_ns;
> > +       /**< Maximum supported global dequeue wait delay(ns) by this
> > device */
> > +       uint32_t dequeue_wait_ns;
> >
> 
> Am I reading this correctly that there is no way to support an indefinite
> waiting capability? Or is this just saying that if a timed wait is
> performed there are min/max limits for the wait duration?

The application can wait indefinitely if required; see the
RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT configuration option.

A trivial application may not need different wait values on each dequeue. This is
a performance optimization opportunity for the implementation.
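
A rough sketch, assuming the RFC's RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag and
a dequeue call that takes a per-call wait argument (the config field name and
dequeue prototype here are illustrative, not final):

/* Request per-dequeue wait selection at configure time (sketch only). */
struct rte_event_dev_config dev_conf = {
        /* ... other fields set as usual ... */
        .event_dev_cfg = RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT,
};
rte_event_dev_configure(dev_id, &dev_conf);

/*
 * With the flag set, every dequeue supplies its own wait value (bounded by
 * the device's min/max_dequeue_wait_ns); without it, the single global
 * dequeue_wait_ns applies and the per-call value is ignored.
 */
struct rte_event ev;
rte_event_dequeue(dev_id, port_id, &ev, wait);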

> 
> 
> > +       /**< Configured global dequeue wait delay(ns) for this device */
> > +       uint8_t max_event_queues;
> > +       /**< Maximum event_queues supported by this device */
> > +       uint32_t max_event_queue_flows;
> > +       /**< Maximum supported flows in an event queue by this device*/
> > +       uint8_t max_event_queue_priority_levels;
> > +       /**< Maximum number of event queue priority levels by this device.
> > +        * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > +        */
> > +       uint8_t nb_event_queues;
> > +       /**< Configured number of event queues for this device */
> >
> 
> Is 256 a sufficient number of queues? While various SoCs may have limits,
> why impose such a small limit architecturally?

Each event queue can potentially support millions of flows, so 256 may not be such
a small limit. The reason for choosing an 8-bit queue_id is to hold all the
attributes of "struct rte_event" within 128 bits, so that SIMD optimization is
possible in the implementation.
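
An implementation is then free to rely on that 128-bit budget, e.g. with a
build-time check along these lines (illustrative only; the actual layout is
defined in the RFC header):

/*
 * Keeping queue_id at 8 bits (together with the other attributes) lets the
 * whole "struct rte_event" fit in 128 bits, so an implementation can move or
 * compare complete events with single SIMD loads/stores.
 */
RTE_BUILD_BUG_ON(sizeof(struct rte_event) != 16);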

> > +       /**< The maximum number of active flows this queue can track at any
> > +        * given time. The value must be in the range of
> > +        * [1 - max_event_queue_flows] which was previously supplied
> > +        * to rte_event_dev_configure().
> > +        */
> > +       uint32_t nb_atomic_order_sequences;
> > +       /**< The maximum number of outstanding events waiting to be
> > (egress-)
> > +        * reordered by this queue. In other words, the number of entries
> > in
> > +        * this queue’s reorder buffer. The value must be in the range of
> > +        * [1 - max_event_queue_flows] which was previously supplied
> > +        * to rte_event_dev_configure().
> >
> 
> What happens if this limit is exceeded? While atomic limits are bounded by
> the number of lcores, the same cannot be said for ordered queues.
> Presumably the queue would refuse further dequeues once this limit is
> reached until pending reorders are resolved to permit continued processing?
> If so that should be stated explicitly.

OK. I will update the details.

> 
> 
> > + *
> > + *
> > + * The device start step is the last one and consists of setting the event
> > + * queues to start accepting the events and schedules to event ports.
> > + *
> > + * On success, all basic functions exported by the API (event enqueue,
> > + * event dequeue and so on) can be invoked.
> > + *
> > + * @param dev_id
> > + *   Event device identifier
> > + * @return
> > + *   - 0: Success, device started.
> > + *   - <0: Error code of the driver device start function.
> > + */
> > +extern int
> > +rte_event_dev_start(uint8_t dev_id);
> > +
> > +/**
> > + * Stop an event device. The device can be restarted with a call to
> > + * rte_event_dev_start()
> > + *
> > + * @param dev_id
> > + *   Event device identifier.
> > + */
> > +extern void
> > +rte_event_dev_stop(uint8_t dev_id);
> >
> 
> Having this be a void function implies this function cannot fail. Is that
> assumption always correct?

Yes. A subsequent rte_event_dev_start() can return an error if the implementation
really has some critical issue in starting the device.

> 
> 
> > +
> > +/**
> > + * Close an event device. The device cannot be restarted!
> > + *
> > + * @param dev_id
> > + *   Event device identifier
> > + *
> > + * @return
> > + *  - 0 on successfully closing device
> > + *  - <0 on failure to close device
> > + */
> > +extern int

