[PATCH v3 01/11] eventdev: improve doxygen introduction text
Mattias Rönnblom
hofors at lysator.liu.se
Thu Feb 8 10:50:06 CET 2024
On 2024-02-07 11:14, Jerin Jacob wrote:
> On Fri, Feb 2, 2024 at 7:29 PM Bruce Richardson
> <bruce.richardson at intel.com> wrote:
>>
>> Make some textual improvements to the introduction to eventdev and event
>> devices in the eventdev header file. This text appears in the doxygen
>> output for the header file, and introduces the key concepts, for
>> example: events, event devices, queues, ports and scheduling.
>>
>> This patch makes the following improvements:
>> * small textual fixups, e.g. correcting use of singular/plural
>> * rewrites of some sentences to improve clarity
>> * using doxygen markdown to split the whole large block up into
>> sections, thereby making it easier to read.
>>
>> No large-scale changes are made, and blocks are not reordered
>>
>> Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
>
> Thanks Bruce. While you are cleaning up, please add the following or a
> similar change to fix doxygen not properly parsing struct
> rte_event_vector, i.e. its members currently show up as global
> variables in the generated HTML files.
>
> l[dpdk.org] $ git diff
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index e31c927905..ce4a195a8f 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1309,9 +1309,9 @@ struct rte_event_vector {
> */
> struct {
> uint16_t port;
> - /* Ethernet device port id. */
> + /**< Ethernet device port id. */
> uint16_t queue;
> - /* Ethernet device queue id. */
> + /**< Ethernet device queue id. */
> };
> };
> /**< Union to hold common attributes of the vector array. */
> @@ -1340,7 +1340,11 @@ struct rte_event_vector {
> * vector array can be an array of mbufs or pointers or opaque u64
> * values.
> */
> +#ifndef __DOXYGEN__
> } __rte_aligned(16);
> +#else
> +};
> +#endif
>
> /* Scheduler type definitions */
> #define RTE_SCHED_TYPE_ORDERED 0
>
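As an aside on why the guard helps: presumably doxygen chokes on the
__rte_aligned(16) attribute placed after the closing brace, and as a result
mis-parses the struct and emits its members as file-scope globals. A minimal
sketch of the pattern, using a made-up struct name rather than anything from
the patch:

    #include <stdint.h>
    #include <rte_common.h>   /* __rte_aligned() */

    struct example_vector {
            uint16_t port;
            /**< The '<' marker attaches this comment to the member above. */
            uint16_t queue;
            /**< A plain comment here would be ignored by doxygen. */
    #ifndef __DOXYGEN__
    } __rte_aligned(16);      /* real build keeps the alignment attribute */
    #else
    };                        /* doxygen pass sees a closing brace it can parse */
    #endif
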
>>
>> ---
>> V3: reworked following feedback from Mattias
>> ---
>> lib/eventdev/rte_eventdev.h | 132 ++++++++++++++++++++++--------------
>> 1 file changed, 81 insertions(+), 51 deletions(-)
>>
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index ec9b02455d..a741832e8e 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -12,25 +12,33 @@
>> * @file
>> *
>> * RTE Event Device API
>> + * ====================
>> *
>> - * In a polling model, lcores poll ethdev ports and associated rx queues
>> - * directly to look for packet. In an event driven model, by contrast, lcores
>> - * call the scheduler that selects packets for them based on programmer
>> - * specified criteria. Eventdev library adds support for event driven
>> - * programming model, which offer applications automatic multicore scaling,
>> - * dynamic load balancing, pipelining, packet ingress order maintenance and
>> - * synchronization services to simplify application packet processing.
>> + * In a traditional run-to-completion application model, lcores pick up packets
>
> Can we keep it as poll mode instead of run-to-completion, as event mode also
> supports run-to-completion by having dequeue() and then Tx.
>
A "traditional" DPDK app is both polling and run-to-completion. You
could always add "polling" somewhere, but "run-to-completion" in that
context serves a purpose, imo.
A single-stage eventdev-based pipeline will also process packets in a
run-to-completion fashion. In such a scenario, the difference between
eventdev and the "tradition" lies in the (ingress-only) load balancing
mechanism used (which the below note on the "traditional" use of RSS
indicates).
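To make the comparison concrete, here is a rough sketch of the two worker
styles. It is not part of the patch; the device, port and queue ids are
invented for illustration:

    #include <rte_ethdev.h>
    #include <rte_eventdev.h>

    /* "Traditional" polling, run-to-completion worker: the lcore picks its
     * packets directly from a NIC Rx queue (RSS spreads flows over queues). */
    static void
    poll_mode_iteration(uint16_t eth_port, uint16_t rxq, uint16_t txq)
    {
            struct rte_mbuf *pkts[32];
            uint16_t n;

            n = rte_eth_rx_burst(eth_port, rxq, pkts, 32);
            /* ... process pkts[0..n-1] to completion ... */
            rte_eth_tx_burst(eth_port, txq, pkts, n);
    }

    /* Single-stage eventdev worker: still run-to-completion from the lcore's
     * point of view; the difference is that the event device, not RSS,
     * decides which lcore gets which packets. */
    static void
    event_mode_iteration(uint8_t evdev, uint8_t ev_port)
    {
            struct rte_event ev[32];
            uint16_t i, n;

            n = rte_event_dequeue_burst(evdev, ev_port, ev, 32, 0);
            for (i = 0; i < n; i++) {
                    /* ... process ev[i].mbuf to completion, then transmit ... */
            }
    }
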
>> + * from Ethdev ports and associated RX queues, run the packet processing to completion,
>> + * and enqueue the completed packets to a TX queue. NIC-level receive-side scaling (RSS)
>> + * may be used to balance the load across multiple CPU cores.
>> + *
>> + * In contrast, in an event-driven model, as supported by this "eventdev" library,
>> + * incoming packets are fed into an event device, which schedules those packets across
>
> packets -> events. We may need to bring in the Rx adapter if the event is a packet.
>
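For reference, the Rx adapter is the component that injects packets into the
event device as events. If a mention is added, a hedged sketch of its setup
could look like the following; all ids and the configuration choices below
are placeholders, not from the patch:

    #include <string.h>
    #include <rte_eventdev.h>
    #include <rte_event_eth_rx_adapter.h>

    static void
    rx_adapter_sketch(uint8_t adapter_id, uint8_t evdev_id, uint16_t eth_port_id)
    {
            struct rte_event_port_conf port_conf;
            struct rte_event_eth_rx_adapter_queue_conf qconf;

            /* Let the PMD suggest a configuration for the adapter's event port. */
            rte_event_port_default_conf_get(evdev_id, 0, &port_conf);

            memset(&qconf, 0, sizeof(qconf));
            qconf.ev.queue_id = 0;                        /* event queue to fill */
            qconf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
            qconf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;

            rte_event_eth_rx_adapter_create(adapter_id, evdev_id, &port_conf);
            /* -1 = add all Rx queues of this ethdev port. */
            rte_event_eth_rx_adapter_queue_add(adapter_id, eth_port_id, -1, &qconf);
            rte_event_eth_rx_adapter_start(adapter_id);
    }
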
>> + * the available lcores, in accordance with its configuration.
>> + * This event-driven programming model offers applications automatic multicore scaling,
>> + * dynamic load balancing, pipelining, packet order maintenance, synchronization,
>> + * and prioritization/quality of service.
>> *
>> * The Event Device API is composed of two parts:
>> *
>> * - The application-oriented Event API that includes functions to setup
>> * an event device (configure it, setup its queues, ports and start it), to
>> - * establish the link between queues to port and to receive events, and so on.
>> + * establish the links between queues and ports to receive events, and so on.
>> *
>> * - The driver-oriented Event API that exports a function allowing
>> - * an event poll Mode Driver (PMD) to simultaneously register itself as
>> + * an event poll Mode Driver (PMD) to register itself as
>> * an event device driver.
>> *
>> + * Application-oriented Event API
>> + * ------------------------------
>> + *
>> * Event device components:
>> *
>> * +-----------------+
>> @@ -75,27 +83,39 @@
>> * | |
>> * +-----------------------------------------------------------+
>> *
>> - * Event device: A hardware or software-based event scheduler.
>> + * **Event device**: A hardware or software-based event scheduler.
>> *
>> - * Event: A unit of scheduling that encapsulates a packet or other datatype
>> - * like SW generated event from the CPU, Crypto work completion notification,
>> - * Timer expiry event notification etc as well as metadata.
>> - * The metadata includes flow ID, scheduling type, event priority, event_type,
>> - * sub_event_type etc.
>> + * **Event**: Represents an item of work and is the smallest unit of scheduling.
>> + * An event carries metadata, such as queue ID, scheduling type, and event priority,
>> + * and data such as one or more packets or other kinds of buffers.
>> + * Some examples of events are:
>> + * - a software-generated item of work originating from a lcore,
>
> lcore.
>
>> + * perhaps carrying a packet to be processed,
>
> processed.
>
>> + * - a crypto work completion notification
>
> notification.
>
>> + * - a timer expiry notification.
>> *
>> - * Event queue: A queue containing events that are scheduled by the event dev.
>> + * **Event queue**: A queue containing events that are scheduled by the event device.
>
> Shouldn't we add "to be" or so?
> i.e.
> A queue containing events that are to be scheduled by the event device.
>
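To illustrate the metadata discussed above (queue id, scheduling type,
priority), a CPU-generated event carrying a packet could be injected roughly
as below. The device id, port id and 'pkt' are placeholders:

    #include <rte_eventdev.h>

    static void
    inject_event_sketch(uint8_t evdev_id, uint8_t ev_port_id, struct rte_mbuf *pkt)
    {
            struct rte_event ev = {0};

            ev.queue_id = 0;                         /* event queue to place it on */
            ev.sched_type = RTE_SCHED_TYPE_ATOMIC;   /* per-flow atomic scheduling */
            ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
            ev.event_type = RTE_EVENT_TYPE_CPU;      /* software-generated work item */
            ev.op = RTE_EVENT_OP_NEW;                /* a brand new event */
            ev.mbuf = pkt;                           /* the data: a packet to process */

            rte_event_enqueue_burst(evdev_id, ev_port_id, &ev, 1);
    }
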
>> * An event queue contains events of different flows associated with scheduling
>> * types, such as atomic, ordered, or parallel.
>> + * Each event given to an event device must have a valid event queue id field in the metadata,
>> + * to specify on which event queue in the device the event must be placed,
>> + * for later scheduling.
>> *
>> - * Event port: An application's interface into the event dev for enqueue and
>> + * **Event port**: An application's interface into the event dev for enqueue and
>> * dequeue operations. Each event port can be linked with one or more
>> * event queues for dequeue operations.
>> - *
>> - * By default, all the functions of the Event Device API exported by a PMD
>> - * are lock-free functions which assume to not be invoked in parallel on
>> - * different logical cores to work on the same target object. For instance,
>> - * the dequeue function of a PMD cannot be invoked in parallel on two logical
>> - * cores to operates on same event port. Of course, this function
>> + * Enqueue and dequeue from a port are not thread-safe, and the expected use-case is
>> + * that each port is polled by only a single lcore. [If this is not the case,
>> + * a suitable synchronization mechanism should be used to prevent simultaneous
>> + * access from multiple lcores.]
>> + * To schedule events to an lcore, the event device will schedule them to the event port(s)
>> + * being polled by that lcore.
>> + *
>> + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
>> + * are non-thread-safe functions, which must not be invoked on the same object in parallel on
>> + * different logical cores.
>> + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
>> + * cores to operate on the same event port. Of course, this function
>> * can be invoked in parallel by different logical cores on different ports.
>> * It is the responsibility of the upper level application to enforce this rule.
>> *
>> @@ -107,22 +127,19 @@
>> *
>> * Event devices are dynamically registered during the PCI/SoC device probing
>> * phase performed at EAL initialization time.
>> - * When an Event device is being probed, a *rte_event_dev* structure and
>> - * a new device identifier are allocated for that device. Then, the
>> - * event_dev_init() function supplied by the Event driver matching the probed
>> - * device is invoked to properly initialize the device.
>> + * When an Event device is being probed, an *rte_event_dev* structure is allocated
>> + * for it and the event_dev_init() function supplied by the Event driver
>> + * is invoked to properly initialize the device.
>> *
>> - * The role of the device init function consists of resetting the hardware or
>> - * software event driver implementations.
>> + * The role of the device init function is to reset the device hardware or
>> + * to initialize the software event driver implementation.
>> *
>> - * If the device init operation is successful, the correspondence between
>> - * the device identifier assigned to the new device and its associated
>> - * *rte_event_dev* structure is effectively registered.
>> - * Otherwise, both the *rte_event_dev* structure and the device identifier are
>> - * freed.
>> + * If the device init operation is successful, the device is assigned a device
>> + * id (dev_id) for application use.
>> + * Otherwise, the *rte_event_dev* structure is freed.
>> *
>> * The functions exported by the application Event API to setup a device
>> - * designated by its device identifier must be invoked in the following order:
>> + * must be invoked in the following order:
>> * - rte_event_dev_configure()
>> * - rte_event_queue_setup()
>> * - rte_event_port_setup()
>> @@ -130,10 +147,15 @@
>> * - rte_event_dev_start()
>> *
>> * Then, the application can invoke, in any order, the functions
>> - * exported by the Event API to schedule events, dequeue events, enqueue events,
>> - * change event queue(s) to event port [un]link establishment and so on.
>> - *
>> - * Application may use rte_event_[queue/port]_default_conf_get() to get the
>> + * exported by the Event API to dequeue events, enqueue events,
>> + * and link and unlink event queue(s) to event ports.
>> + *
>> + * Before configuring a device, an application should call rte_event_dev_info_get()
>> + * to determine the capabilities of the event device, and any queue or port
>> + * limits of that device. The parameters set in the various device configuration
>> + * structures may need to be adjusted based on the max values provided in the
>> + * device information structure returned from the info_get API.
>
> Can we add the full name of info_get()?
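The full name would be rte_event_dev_info_get(). For illustration, a hedged
sketch of the documented setup order for a single queue and port, with error
handling omitted and the configuration values chosen only as an example:

    #include <string.h>
    #include <rte_eventdev.h>

    static void
    setup_sketch(uint8_t dev_id)
    {
            struct rte_event_dev_info info;
            struct rte_event_dev_config cfg;
            uint8_t queue_id = 0;

            /* 1. Query capabilities and limits before configuring. */
            rte_event_dev_info_get(dev_id, &info);

            memset(&cfg, 0, sizeof(cfg));
            cfg.nb_event_queues = 1;     /* must stay within info.max_event_queues */
            cfg.nb_event_ports = 1;      /* must stay within info.max_event_ports */
            cfg.nb_events_limit = info.max_num_events;
            cfg.nb_event_queue_flows = info.max_event_queue_flows;
            cfg.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
            cfg.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
            cfg.dequeue_timeout_ns = info.min_dequeue_timeout_ns;

            /* 2..5. Configure, set up queue and port, link them, then start. */
            rte_event_dev_configure(dev_id, &cfg);
            rte_event_queue_setup(dev_id, queue_id, NULL);  /* NULL: default config */
            rte_event_port_setup(dev_id, 0, NULL);          /* NULL: default config */
            rte_event_port_link(dev_id, 0, &queue_id, NULL, 1);
            rte_event_dev_start(dev_id);
    }
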