[dpdk-dev] [PATCH v3 1/4] ethdev: add a field for rxq info structure

Chengchang Tang tangchengchang at huawei.com
Mon Sep 7 14:06:32 CEST 2020


Hi Matan

On 2020/9/7 16:28, Matan Azrad wrote:
> 
> Hi Chengchang
> 
> From: Chengchang Tang:
>> Hi Matan
>>
>> On 2020/9/6 21:45, Matan Azrad wrote:
>>>
>>> Hi Chengchang
>>>
>>> From: Chengchang Tang:
>>>> Hi, Matan
>>>>
>>>> On 2020/9/2 18:30, Matan Azrad wrote:
>>>>> Hi Chengchang
>>>>>
>>>>> From: Chengchang Tang
>>>>>> Hi, Matan
>>>>>>
>>>>>> On 2020/9/2 15:19, Matan Azrad wrote:
>>>>>>>
>>>>>>> Hi Chengchang
>>>>>>>
>>>>>>> From: Chengchang Tang
>>>>>>>> Hi, Matan
>>>>>>>>
>>>>>>>> On 2020/9/1 23:33, Matan Azrad wrote:
>>>>>>>>>
>>>>>>>>> Hi Chengchang
>>>>>>>>>
>>>>>>>>> Please see some question below.
>>>>>>>>>
>>>>>>>>> From: Chengchang Tang
>>>>>>>>>> Add a field named rx_buf_size to rte_eth_rxq_info to indicate
>>>>>>>>>> the buffer size used by the HW when receiving packets.
>>>>>>>>>>
>>>>>>>>>> In this way, upper-layer users can get this information by
>>>>>>>>>> calling rte_eth_rx_queue_info_get.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Chengchang Tang <tangchengchang at huawei.com>
>>>>>>>>>> Reviewed-by: Wei Hu (Xavier) <xavier.huwei at huawei.com>
>>>>>>>>>> Acked-by: Andrew Rybchenko <arybchenko at solarflare.com>
>>>>>>>>>> ---
>>>>>>>>>>  lib/librte_ethdev/rte_ethdev.h | 2 ++
>>>>>>>>>>  1 file changed, 2 insertions(+)
>>>>>>>>>>
> <snip>
>>>>> So the user can configure X and the driver will use Y!=X?
>>>>
>>>> Yes, it depends on the HW. The queue setup API only checks that the
>>>> input is greater than the required minimum value, but HW usually has
>>>> additional requirements such as alignment.
>>>> So when X does not meet these requirements, the PMD calculates a new
>>>> value Y that does meet them and uses it to configure the hardware
>>>> (Y <= X, to ensure no memory overflow occurs).
>>>>> Should the application validate its own configurations after setting
>>>>> them successfully?
>>>>
>>>> It depends on the application's needs. Applications should not be
>>>> forced to verify it, as that would hurt the ease of use of PMDs, and
>>>> some applications simply don't care about this value.
>>>
>>> I understand.
>>> It looks to me like a bad ping-pong between app and PMD (for all the
>>> fields in the struct), and we should avoid adding fields to this
>>> structure if we can.
>>>
>>> What about adding a field in rte_eth_dev_info to expose the rx buffer
>>> alignment supported by the PMD?
>>> Then the application has all the knowledge you want to expose before
>>> the configuration.
>>
>> This may not work, because there may be other hardware-design
>> restrictions besides alignment, so it is difficult to describe all the
>> constraints in a single field. Moreover, this approach would constrain
>> the PMDs and HW to some extent.
> 
> Ok, so maybe another ethdev capability API to get the Rx buffer size
> adjustment done by the PMD?
> Don't you think this is important information for the application in
> order to decide the mempool buffer size / whether to enable scatter?

I guess what you mean is that this is more like a capability, so the
application should query it through a capability API. If I understand
correctly, I agree with that. But I think it is still okay to use this
structure to export queue-related information at runtime, since it
focuses on querying the current queue configuration.

There seems to be no suitable API for querying this capability today,
so maybe we need to introduce a new one. But I'm not sure it is really
necessary.
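For illustration, here is a minimal sketch (my own, not part of the
patch) of how an application could read the value back once the field
exists. The helper name is hypothetical; rte_eth_rx_queue_info_get()
is the existing ethdev API:

    #include <rte_ethdev.h>

    /* Hypothetical helper: return the Rx buffer size the HW actually
     * uses on the given queue, or 0 if the query is not supported. */
    static uint32_t
    get_hw_rx_buf_size(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_eth_rxq_info qinfo;

        if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0)
            return 0;

        /* rx_buf_size is the field added by this patch. */
        return qinfo.rx_buf_size;
    }

The application could then compare this value with its mempool data
room size, e.g. to decide whether Rx scatter needs to be enabled.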

> 
> In any case, I think you should add documentation to the Rx setup API
> stating that the HW buf size may be changed by the PMD.

There is no defined rule for how to configure the Rx buffer size; that
is, there is no specific method for applications to configure it.
However, most PMDs configure the Rx buffer size based on the data size
of the mempool, so if such a description is added to the setup API, the
method for configuring the Rx buffer size becomes fixed. I think this
issue should involve more people in the discussion; maybe we should
send a separate patch.
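To make the "based on the data size of the mempool" point concrete,
here is a hedged sketch of the typical PMD-side calculation discussed
in this thread (the X -> Y adjustment). The alignment constant and the
function name are hypothetical; the mbuf helpers are existing DPDK
APIs:

    #include <rte_common.h>
    #include <rte_mbuf.h>

    /* Assumption for illustration only: a device that requires the Rx
     * buffer size to be a multiple of 128 bytes. */
    #define HW_RX_BUF_ALIGN 128

    static uint16_t
    pmd_calc_rx_buf_size(struct rte_mempool *mp)
    {
        /* X: the room each mbuf offers for packet data. */
        uint16_t buf_size = rte_pktmbuf_data_room_size(mp) -
                            RTE_PKTMBUF_HEADROOM;

        /* Y: round down to the HW constraint, so Y <= X and the HW
         * can never write past the end of the mbuf data room. */
        return RTE_ALIGN_FLOOR(buf_size, HW_RX_BUF_ALIGN);
    }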
> 
> <snip>


