[PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API

Andrew Rybchenko andrew.rybchenko at oktetlabs.ru
Thu Feb 2 10:28:44 CET 2023


On 2/1/23 18:50, Jiawei(Jonny) Wang wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko at oktetlabs.ru>
>> Subject: Re: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue
>> API
>>
>> On 1/30/23 20:00, Jiawei Wang wrote:
>>> Add the new tx_phy_affinity field into the padding hole of the
>>> rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
>>> Add a suppression rule for the structure change in the ABI check file.
>>>
>>> This patch adds the testpmd command line:
>>> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
>>>
>>> For example, suppose there are two hardware ports 0 and 1 connected
>>> to a single DPDK port (port id 0), where phy_affinity 1 stands for
>>> hardware port 0 and phy_affinity 2 stands for hardware port 1. Use the
>>> commands below to configure the Tx phy affinity per Tx queue:
>>>           port config 0 txq 0 phy_affinity 1
>>>           port config 0 txq 1 phy_affinity 1
>>>           port config 0 txq 2 phy_affinity 2
>>>           port config 0 txq 3 phy_affinity 2
>>>
>>> These commands configure TxQ index 0 and TxQ index 1 with phy
>>> affinity 1: packets sent on TxQ 0 or TxQ 1 will leave through
>>> hardware port 0. Similarly, packets sent on TxQ 2 or TxQ 3 will
>>> leave through hardware port 1.
>>
>> Frankly speaking I dislike it. Why do we need to expose it at the generic
>> ethdev layer? IMHO a dynamic mbuf field would be a better solution to
>> control Tx routing to a specific PHY port.
>>
> 
> OK, the phy affinity is not part of the packet information (like a timestamp).

Why? port_id is packet information. Why is phy_subport_id not
packet information?
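
To make the idea concrete, a minimal sketch of the dynamic field
approach (the field name below is made up; registration would really
belong to the PMD or a dedicated ethdev API):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    /* Hypothetical dynamic field carrying the outgoing phy port. */
    static const struct rte_mbuf_dynfield tx_phy_affinity_desc = {
            .name = "pmd_dynfield_tx_phy_affinity",  /* made-up name */
            .size = sizeof(uint8_t),
            .align = __alignof__(uint8_t),
    };

    static int tx_phy_offset;

    static int
    setup_tx_phy_field(void)
    {
            tx_phy_offset = rte_mbuf_dynfield_register(&tx_phy_affinity_desc);
            return tx_phy_offset < 0 ? -1 : 0;
    }

    /* Per packet, before rte_eth_tx_burst():
     * 0 - no affinity, N - physical port N. */
    static inline void
    set_tx_phy(struct rte_mbuf *m, uint8_t phy)
    {
            *RTE_MBUF_DYNFIELD(m, tx_phy_offset, uint8_t *) = phy;
    }

This way the routing decision stays per-packet and nothing new has to
be exposed in rte_eth_txconf.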

> And second, the phy affinity belongs to the queue layer, that is, the
> phy affinity value should behave the same for a given queue.
> After the TxQ is created, packets should be sent out of the same
> physical port if the same TxQ index is used.

Why should these queues be visible to the DPDK application?
Nothing prevents you from creating many HW queues behind one ethdev
queue. Of course, there are questions related to the descriptor status
API in this case, but IMHO it would be better than exposing
these details at the application level.
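
Just to make the contrast explicit, with the per-queue field from the
patch the application would fix the mapping once at setup time, roughly
like this (a sketch; tx_phy_affinity is the field the patch adds):

    #include <rte_ethdev.h>

    static int
    setup_queue_affinity(uint16_t port_id, uint16_t queue_id,
                         uint16_t nb_txd, uint8_t affinity)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_txconf txconf;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;

            txconf = dev_info.default_txconf;
            /* 0 - no affinity, 1 - HW port 0, 2 - HW port 1, ... */
            txconf.tx_phy_affinity = affinity;
            return rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                                          rte_eth_dev_socket_id(port_id),
                                          &txconf);
    }

Every packet sent on that queue then inherits the mapping; there is no
per-packet control at all.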

> 
>> IMHO, we definitely need dev_info information about the number of physical
>> ports behind the ethdev. Advertising a value greater than 0 should mean that
>> the PMD supports the corresponding mbuf dynamic field to control the outgoing
>> physical port on Tx (or should just reject packets on prepare which try to
>> specify an outgoing phy port otherwise). In the same way, the information may
>> be provided on Rx.
>>
> 
> See above, I think phy affinity is queue-level, not per-packet.
> 
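
Even if it stays queue-level, the number of physical ports behind the
device still has to be discoverable. A sketch of the dev_info part I
mean (nb_phy_ports is a made-up field name, not existing API):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    check_phy_ports(uint16_t port_id)
    {
            struct rte_eth_dev_info dev_info;

            if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                    return;

            /* nb_phy_ports is hypothetical: physical ports behind ethdev */
            if (dev_info.nb_phy_ports > 1)
                    printf("port %u: can steer Tx among %u phy ports\n",
                           port_id, dev_info.nb_phy_ports);
    }
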
>> I'm OK to have 0 as the "no phy affinity" value and greater than zero as a
>> specific phy affinity, i.e. no dynamic flag is required.
>>
> 
> Thanks for agreement.
> 
>> Also I think that the order of the patches should be different.
>> We should start with a patch which provides dev_info, and the flow API
>> matching and action should come in a later patch.
>>
> 
> OK.
>   
>>>
>>> Signed-off-by: Jiawei Wang <jiaweiw at nvidia.com>
>>
>> [snip]
> 


