[PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure
Mattias Rönnblom
hofors at lysator.liu.se
Fri Feb 2 10:24:54 CET 2024
On 2024-01-31 15:37, Bruce Richardson wrote:
> On Wed, Jan 24, 2024 at 12:51:03PM +0100, Mattias Rönnblom wrote:
>> On 2024-01-23 10:43, Bruce Richardson wrote:
>>> On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
>>>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>>>> Some small rewording changes to the doxygen comments on struct
>>>>> rte_event_dev_info.
>>>>>
>>>>> Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
>>>>> ---
>>>>> lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
>>>>> 1 file changed, 25 insertions(+), 21 deletions(-)
>>>>>
>>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>>> index 57a2791946..872f241df2 100644
>>>>> --- a/lib/eventdev/rte_eventdev.h
>>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>>> @@ -482,54 +482,58 @@ struct rte_event_dev_info {
>>>>> const char *driver_name; /**< Event driver name */
>>>>> struct rte_device *dev; /**< Device information */
>>>>> uint32_t min_dequeue_timeout_ns;
>>>>> - /**< Minimum supported global dequeue timeout(ns) by this device */
>>>>> + /**< Minimum global dequeue timeout(ns) supported by this device */
>>>>
>>>> Are we missing a bunch of "." here and in the other fields?
>>>>
>>>>> uint32_t max_dequeue_timeout_ns;
>>>>> - /**< Maximum supported global dequeue timeout(ns) by this device */
>>>>> + /**< Maximum global dequeue timeout(ns) supported by this device */
>>>>> uint32_t dequeue_timeout_ns;
>>>>> /**< Configured global dequeue timeout(ns) for this device */
>>>>> uint8_t max_event_queues;
>>>>> - /**< Maximum event_queues supported by this device */
>>>>> + /**< Maximum event queues supported by this device */
>>>>> uint32_t max_event_queue_flows;
>>>>> - /**< Maximum supported flows in an event queue by this device*/
>>>>> + /**< Maximum number of flows within an event queue supported by this device*/
>>>>> uint8_t max_event_queue_priority_levels;
>>>>> /**< Maximum number of event queue priority levels by this device.
>>>>> - * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
>>>>> + * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
>>>>> + * The priority levels are evenly distributed between
>>>>> + * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
>>>>
>>>> This is a change of the API, in the sense it's defining something previously
>>>> left undefined?
>>>>
>>>
>>> Well, undefined is pretty useless for app writers, no?
>>> However, agreed that the range between HIGHEST and LOWEST is an assumption
>>> on my part, chosen because it matches what happens to the event priorities
>>> which are documented in the event struct as "The implementation shall normalize
>>> the requested priority to supported priority value" - which, while better
>>> than nothing, does technically leave the details of how normalization
>>> occurs up to the implementation.
>>>
>>>> If you need 6 different priority levels in an app, how do you go about
>>>> making sure you find the correct (distinct) Eventdev levels on any event
>>>> device supporting >= 6 levels?
>>>>
>>>> #define NUM_MY_LEVELS 6
>>>>
>>>> #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) *
>>>> (RTE_EVENT_DEV_PRIORITY_LOWEST-RTE_EVENT_DEV_PRIORITY_HIGHEST)) /
>>>> NUM_MY_LEVELS)
>>>>
>>>> This way? One would worry a bit about exactly what "evenly" means, in
>>>> terms of rounding errors. If you have an event device with 255 priority
>>>> levels, and (say) 256 levels available in the API, which two API levels
>>>> end up with the same priority?
>>>
>>> Yes, rounding etc. will be an issue in cases of non-powers-of-2.
>>> However, I think we do need to clarify this behaviour, so I'm open to
>>> alternative suggestions as to how to update this.
>>>
>>
>> In retrospect, maybe it would have been better to just express the number of
>> priority levels an event device supported, only allow [0, max_levels - 1] in
>> the prio field, and leave it to the app to do the conversion/normalization.
>>
>
> Yes, in many ways that would be better.
>
>> Maybe a new <rte_eventdev.h> helper macro would at least suggest to the PMD
>> driver implementer and the application designer how this normalization
>> should work. Something like the above, but where NUM_MY_LEVELS is an input
>> parameter. Would result in an integer division though, so shouldn't be used
>> in the fast path.
>
> I think it's actually ok now, having the drivers do the work, since each
> driver can choose the optimal method. For those having a power-of-2 number
> of priorities, just a shift op works best.
>
I had an application-usable macro in mind, not a macro for PMDs. Showing
how app-level priority levels should map to Eventdev API-level priority
levels would, by implication, show how an event device should do the
Eventdev API priority -> PMD-level priority compression.
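Something along these lines, roughly. A sketch only: the macro name and
the num_levels parameter are made up, and I'm assuming "evenly" means
plain integer division over the API range (remembering that
RTE_EVENT_DEV_PRIORITY_HIGHEST is numerically the *smallest* value):

#include <rte_eventdev.h>

/* Hypothetical helper: map an app-level priority in
 * [0, num_levels - 1] (0 = highest) onto the Eventdev API range
 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
 * Involves an integer division, so not for the fast path, and
 * subject to the rounding caveats discussed above.
 */
#define APP_PRIO_TO_EVENTDEV_PRIO(app_prio, num_levels)		\
	(RTE_EVENT_DEV_PRIORITY_HIGHEST +				\
	 ((app_prio) * (RTE_EVENT_DEV_PRIORITY_LOWEST -		\
			RTE_EVENT_DEV_PRIORITY_HIGHEST) /		\
	  (num_levels)))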
The event device has exactly zero freedom in choosing how to translate
Eventdev API-level priorities to its internal priorities; otherwise it
risks not differentiating between app-level priority levels. If an event
device supports 128 levels, are RTE_EVENT_DEV_PRIORITY_NORMAL and
RTE_EVENT_DEV_PRIORITY_NORMAL-1 the same PMD-level priority or not? I
would *guess* the same, but that is just an assumption, and not something
that follows from "normalize", I think.
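To make that concrete: a device with 128 internal levels might well do
the compression with just a shift, along the lines of the below. This is
my assumption of what such a PMD might do, not something taken from any
actual driver:

#include <stdint.h>
#include <rte_eventdev.h>

/* Assumed PMD-internal compression: 256 API levels down to 128
 * device levels, i.e., a right shift by one bit.
 */
static inline uint8_t
pmd_compress_prio(uint8_t api_prio)
{
	return api_prio >> 1;
}

/* Under this particular mapping:
 *   pmd_compress_prio(RTE_EVENT_DEV_PRIORITY_NORMAL)     == 64
 *   pmd_compress_prio(RTE_EVENT_DEV_PRIORITY_NORMAL - 1) == 63
 *   pmd_compress_prio(RTE_EVENT_DEV_PRIORITY_NORMAL + 1) == 64
 * so NORMAL and NORMAL-1 would in fact stay distinct, while NORMAL
 * and NORMAL+1 collapse - which just goes to show that the answer
 * does not follow from "normalize" alone.
 */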
Anyway, this is not a problem this patch set necessarily needs to solve.
> The key thing for the documentation here, to my mind, is to make it clear
> that though the number of priorities is N, you still specify the relative
> priorities in the range of 0-255. This is documented in the queue
> configuration, so, while we could leave it unmentioned here, I think for
> clarity it should be called out. I'm going to reword slightly as:
>
> * The implementation shall normalize priority values specified between
> * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
> * to map them internally to this range of priorities.
> *
> * @see rte_event_queue_conf.priority
>
> This way the wording in the two places is similar.
>
> /Bruce
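For completeness, from the application point of view that means
something like the below, regardless of how many priority levels the
device actually reports (a sketch; most error handling omitted):

#include <rte_eventdev.h>

static int
setup_high_prio_queue(uint8_t dev_id, uint8_t queue_id)
{
	struct rte_event_queue_conf conf;
	int ret;

	ret = rte_event_queue_default_conf_get(dev_id, queue_id, &conf);
	if (ret < 0)
		return ret;

	/* Always given in the full 0-255 API range, even if the
	 * device reports e.g. max_event_queue_priority_levels == 8.
	 * Only honored on devices with RTE_EVENT_DEV_CAP_QUEUE_QOS.
	 */
	conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;

	return rte_event_queue_setup(dev_id, queue_id, &conf);
}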