[dpdk-dev] Enable TPH in i40e PMD

Phil Yang (Arm Technology China) Phil.Yang at arm.com
Tue Nov 26 09:46:36 CET 2019


Hi Qi,

I hope this mail finds you well.
I am trying to benchmark the TLP Processing Hints (TPH) feature of the XL710 NIC with the i40e PMD on our platform. Our test server supports the TPH identification feature.
But it seems that the NIC does not work as expected in TPH-enabled mode.

I am not sure I performed the right operations to enable the TPH feature on the XL710, so please correct me if I am wrong.
According to the XL710 datasheet (3.1.2.6.2 TLP Processing Hints (TPH)<https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xl710-10-40-controller-datasheet.pdf>), I took the following steps to enable TPH on the XL710:

  1.  Set TPH enable bits in the receive and transmit queue context;
  2.  Set the CPUID fields in the receive and transmit queue context to provide steering information;
The modifications for steps 1 & 2 are listed below. I fixed the cpuid to 3 in this test.
----
|  drivers/net/i40e/i40e_rxtx.c | 6 ++++++
|  1 file changed, 6 insertions(+)
|
| diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
| index 17dc8c7..0b0ffe1 100644
| --- a/drivers/net/i40e/i40e_rxtx.c
| +++ b/drivers/net/i40e/i40e_rxtx.c
| @@ -2517,6 +2517,11 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq)
|        if (vsi->type == I40E_VSI_FDIR)
|                tx_ctx.fd_ena = TRUE;
|
| +      tx_ctx.cpuid = 3; /* The last core on the server */
| +      tx_ctx.tphrdesc_ena = 1;
| +      tx_ctx.tphrpacket_ena = 1;
| +      tx_ctx.tphwdesc_ena = 1;
| +
|        err = i40e_clear_lan_tx_queue_context(hw, pf_q);
|        if (err != I40E_SUCCESS) {
|                PMD_DRV_LOG(ERR, "Failure of clean lan tx queue context");
| @@ -2665,6 +2670,7 @@ i40e_rx_queue_init(struct i40e_rx_queue *rxq)
|
|        rx_ctx.base = rxq->rx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
|        rx_ctx.qlen = rxq->nb_rx_desc;
| +      rx_ctx.cpuid = 3; /* There are 4 cores, this is the last core */
|  #ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
|        rx_ctx.dsize = 1;
|  #endif
----
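
As a side note, the RX queue context in the base code exposes its own TPH enable bits (struct i40e_hmc_obj_rxq in drivers/net/i40e/base/i40e_lan_hmc.h), which the diff above leaves clear on the RX side. A sketch of setting them next to the cpuid assignment, assuming they mirror the TX-side enables:

----
        rx_ctx.cpuid = 3;        /* as in the diff above */
        rx_ctx.tphrdesc_ena = 1; /* TPH on descriptor reads */
        rx_ctx.tphwdesc_ena = 1; /* TPH on descriptor write-back */
        rx_ctx.tphdata_ena = 1;  /* TPH on packet data writes */
        rx_ctx.tphhead_ena = 1;  /* TPH on header writes (header split) */
----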


  3.  Bind the XL710 ports to the vfio-pci driver and run testpmd:
$ sudo ./build/app/dpdk-testpmd -l 2-3 -n 4 --socket-mem=2048 -- -i

  4.  Set the TPH Requester Control register in PCIe config space (enable the TPH Requester and set ST Mode Select to Device Specific Mode) [11.4.5.3 TPH Requester Control Register (0x1A8; R/W)]<https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xl710-10-40-controller-datasheet.pdf>
Before Setting:
$ sudo lspci -vvvvxxxx -s 0001:01:00.0 | grep 1a0
        Capabilities: [1a0 v1] Transaction Processing Hints
1a0: 17 00 01 1b 05 00 00 00 00 00 00 00 00 00 00 00
$ sudo lspci -vvvvxxxx -s 0001:01:00.1 | grep 1a0
        Capabilities: [1a0 v1] Transaction Processing Hints
1a0: 17 00 01 1b 05 00 00 00 00 00 00 00 00 00 00 00

Set TPH:
$ sudo setpci -s 0001:01:00.0 1A8.L=102
$ sudo setpci -s 0001:01:00.1 1A8.L=102

After Setting:
$ sudo lspci -vvvvxxxx -s 0001:01:00.0 | grep 1a0
        Capabilities: [1a0 v1] Transaction Processing Hints
1a0: 17 00 01 1b 05 00 00 00 02 01 00 00 00 00 00 00
$ sudo lspci -vvvvxxxx -s 0001:01:00.1 | grep 1a0
        Capabilities: [1a0 v1] Transaction Processing Hints
1a0: 17 00 01 1b 05 00 00 00 02 01 00 00 00 00 00 00
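
For reference, these dumps decode per the PCIe TPH Requester capability layout: the capability dword at 1a4 (0x00000005) advertises No ST Mode and Device Specific Mode support, and after the write the control register at 1a8 holds 0x102. A minimal sketch decoding that control value (field layout per the PCIe spec; the macro names are mine):

----
#include <stdint.h>
#include <stdio.h>

/* TPH Requester Control register, offset 8 within the TPH capability
 * (i.e. 0x1A8 here). Field layout per the PCIe spec. */
#define TPH_CTRL_ST_MODE_MASK  0x7u  /* bits 2:0, 010b = Device Specific */
#define TPH_CTRL_REQ_EN_SHIFT  8
#define TPH_CTRL_REQ_EN_MASK   0x3u  /* bits 9:8, 01b = TPH Requester enabled */

int main(void)
{
        uint32_t ctrl = 0x102; /* the value written with setpci above */

        printf("ST Mode Select:   %u\n", ctrl & TPH_CTRL_ST_MODE_MASK);
        printf("Requester Enable: %u\n",
               (ctrl >> TPH_CTRL_REQ_EN_SHIFT) & TPH_CTRL_REQ_EN_MASK);
        return 0;
}
----

This prints ST Mode Select 2 and Requester Enable 1, i.e. Device Specific ST mode with the plain (non-Extended) TPH Requester enabled, which matches the intent of step 4.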



  5.  Start testpmd io forwarding on core 3 and run the RFC2544 test.

Test results (using the RFC2544 result with the default XL710 configuration as the baseline):

  1.  Followed the above steps 1 - 5: the throughput dropped 67%;
  2.  Skipped steps 1 - 2 and followed only steps 3 - 5: the throughput dropped 67%;
  3.  Changed the cpuid to 4 (to verify whether cpuid counts from 1) and remeasured with RFC2544: the throughput dropped 67%.

So in every case, after we configured the TPH Requester Control register, the throughput dropped sharply.
Do you know what could cause this? Could you show me the right steps to enable the TPH feature for the XL710 with the i40e PMD?
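
In case it helps with debugging, here is a sketch of reading the TX queue context back after it is programmed, to confirm the TPH bits actually stick. This assumes the base driver's i40e_get_lan_tx_queue_context() (declared in drivers/net/i40e/base/i40e_lan_hmc.h) can be called from i40e_tx_queue_init():

----
        /* After i40e_set_lan_tx_queue_context() in i40e_tx_queue_init() */
        struct i40e_hmc_obj_txq chk;

        memset(&chk, 0, sizeof(chk));
        err = i40e_get_lan_tx_queue_context(hw, pf_q, &chk);
        if (err != I40E_SUCCESS)
                PMD_DRV_LOG(ERR, "Failed to read back tx queue context");
        else
                PMD_DRV_LOG(INFO, "cpuid=%u tphrdesc=%u tphrpacket=%u tphwdesc=%u",
                            chk.cpuid, chk.tphrdesc_ena,
                            chk.tphrpacket_ena, chk.tphwdesc_ena);
----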

Thanks in advance!

Best Regards,
Phil Yang

