[RFC 2/5] ethdev: introduce the affinity field in Tx queue API
Ori Kam
orika at nvidia.com
Wed Jan 11 17:47:31 CET 2023
Hi Jiawei,
> -----Original Message-----
> From: Jiawei(Jonny) Wang <jiaweiw at nvidia.com>
> Sent: Wednesday, 21 December 2022 12:30
>
> For multiple hardware ports connected to a single DPDK port (mhpsdp),
> the previous patch introduces a new rte_flow item to match the port
> affinity of received packets.
>
> This patch adds the tx_affinity setting to the Tx queue API; the affinity
> value selects which hardware port the packets of a queue are sent to.
>
> The new tx_affinity field is added into a padding hole of the
> rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
> A suppression rule for this structure change is added to the ABI check file.
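For reference, a minimal application-level sketch of how a Tx queue could be
bound to a hardware port through the proposed field. The tx_affinity member
and its uint8_t width are assumptions taken from this RFC and are not part of
a released ethdev API; rte_eth_dev_info_get() and rte_eth_tx_queue_setup()
are existing ethdev calls, and setup_txq_with_affinity() is only an
illustrative helper name:

    #include <rte_ethdev.h>

    static int
    setup_txq_with_affinity(uint16_t port_id, uint16_t queue_id,
                            uint16_t nb_desc, unsigned int socket_id,
                            uint8_t affinity)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_txconf txconf;
            int ret;

            /* Start from the driver's default Tx configuration. */
            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;
            txconf = dev_info.default_txconf;

            /* 0 keeps the default behaviour (no affinity); values 1..N pick
             * a hardware port of the mhpsdp group, as proposed in this RFC. */
            txconf.tx_affinity = affinity;

            return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
                                          socket_id, &txconf);
    }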
>
> This patch adds the testpmd command line:
> testpmd> port config (port_id) txq (queue_id) affinity (value)
>
> For example, if there are two hardware ports connected to a single DPDK
> port (port id 0), with affinity 1 standing for hardware port 1 and
> affinity 2 standing for hardware port 2, the commands below configure the
> Tx affinity for each TxQ:
> port config 0 txq 0 affinity 1
> port config 0 txq 1 affinity 1
> port config 0 txq 2 affinity 2
> port config 0 txq 3 affinity 2
>
> These commands configure TxQ index 0 and TxQ index 1 with affinity 1, so
> packets sent on TxQ 0 or TxQ 1 leave through hardware port 1; likewise,
> packets sent on TxQ 2 or TxQ 3 leave through hardware port 2.
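The same mapping could be expressed in application code roughly as follows.
This fragment reuses the hypothetical setup_txq_with_affinity() helper
sketched above (not an ethdev API); the descriptor count 512 and
SOCKET_ID_ANY are arbitrary choices for illustration:

    /* TxQ 0-1 on hardware port 1, TxQ 2-3 on hardware port 2,
     * matching the testpmd example for port id 0. */
    uint16_t q;

    for (q = 0; q < 4; q++) {
            uint8_t affinity = (q < 2) ? 1 : 2;

            if (setup_txq_with_affinity(0, q, 512, SOCKET_ID_ANY,
                                        affinity) != 0)
                    rte_exit(EXIT_FAILURE, "Tx queue %u setup failed\n", q);
    }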
>
> Signed-off-by: Jiawei Wang <jiaweiw at nvidia.com>
> ---
Acked-by: Ori Kam <orika at nvidia.com>
Best,
Ori