ask for TXA_FLUSH_THRESHOLD change
Jaeeun Ham
jaeeun.ham at ericsson.com
Wed Jun 22 04:13:56 CEST 2022
Hi,
Could you guide me on how to eliminate or reduce tx drop/retry?
Lowering TXA_FLUSH_THRESHOLD helped somewhat, but it did not clear the packet tx drops.
========[ TX adapter stats ]========
tx_retry: 17499893
tx_packets: 7501716
tx_dropped: 5132458
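(For reference, counters like these can be read with rte_event_eth_tx_adapter_stats_get(); below is a minimal sketch of such a poll. The adapter id 0 and the print loop are illustrative assumptions, not the exact code in my application.)

    /* Minimal sketch: polling Tx adapter counters with the public stats API.
     * Adapter id 0 is an assumption for illustration. */
    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_event_eth_tx_adapter.h>

    static void
    print_txa_stats(uint8_t txa_id)
    {
        struct rte_event_eth_tx_adapter_stats stats;

        if (rte_event_eth_tx_adapter_stats_get(txa_id, &stats) == 0) {
            printf("========[ TX adapter stats ]========\n");
            printf("tx_retry:   %" PRIu64 "\n", stats.tx_retry);
            printf("tx_packets: %" PRIu64 "\n", stats.tx_packets);
            printf("tx_dropped: %" PRIu64 "\n", stats.tx_dropped);
        }
    }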
BR/Jaeeun
From: Jayatheerthan, Jay <jay.jayatheerthan at intel.com>
Sent: Thursday, June 16, 2022 3:40 PM
To: Jaeeun Ham <jaeeun.ham at ericsson.com>; dev at dpdk.org
Cc: Jerin Jacob <jerinj at marvell.com>
Subject: RE: ask for TXA_FLUSH_THRESHOLD change
Hi Jaeeun,
See my responses inline below.
-Jay
From: Jaeeun Ham <jaeeun.ham at ericsson.com>
Sent: Monday, June 13, 2022 5:51 AM
To: dev at dpdk.org
Cc: Jerin Jacob <jerinj at marvell.com>; Jayatheerthan, Jay <jay.jayatheerthan at intel.com>
Subject: ask for TXA_FLUSH_THRESHOLD change
Hi,
There was a latency delay when I increased the number of DPDK (20.11.1) worker cores. (One worker core was okay.)
When I decreased the TXA_FLUSH_THRESHOLD value (1024 to 32), the latency was okay.
It's TXA_FLUSH_THRESHOLD in lib/librte_eventdev/rte_event_eth_tx_adapter.c. // https://git.dpdk.org/dpdk-stable/tree/lib/librte_eventdev/rte_event_eth_tx_adapter.c?h=20.11#n15
When the TXA_FLUSH_THRESHOLD value was changed from 1024 to 32, the latency test result was fine on 10 cores for low traffic (DL: 20 Mbps / UL: 17 kbps).
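The change I tested is just that compile-time constant; a sketch of the edit (surrounding context approximate):

--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
-#define TXA_FLUSH_THRESHOLD 1024
+#define TXA_FLUSH_THRESHOLD 32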
I think this makes rte_eth_tx_buffer_flush() get called more frequently.
But I'm not sure whether this approach can cause worse performance or not.
Do you have any opinion about this?
[Jay] Yes, it will cause rte_eth_tx_buffer_flush() to be called more often. It can lead to less batching benefit. The typical performance vs. latency trade-off decision applies here.
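To illustrate the trade-off, a simplified sketch (not the actual adapter source; FLUSH_THRESHOLD, loop_cnt, and tx_service_iteration are illustrative names) of a service loop that only flushes the Tx buffer once its loop counter crosses the threshold:

    /* Simplified sketch, not the actual adapter source: a service loop that
     * buffers packets and only flushes once a loop counter crosses the
     * threshold. Lowering the threshold flushes smaller bursts more often,
     * trading batching efficiency for latency. */
    #include <rte_ethdev.h>

    #define FLUSH_THRESHOLD 32 /* the value under discussion; was 1024 */

    static uint64_t loop_cnt;

    static void
    tx_service_iteration(uint16_t port_id, uint16_t queue_id,
                         struct rte_eth_tx_buffer *buf)
    {
        /* ... dequeued mbufs are queued with rte_eth_tx_buffer() here ... */

        if (++loop_cnt >= FLUSH_THRESHOLD) {
            /* push out any partially filled burst to the NIC */
            rte_eth_tx_buffer_flush(port_id, queue_id, buf);
            loop_cnt = 0;
        }
    }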
The similar RDK setting RTE_BRIDGE_ETH_TX_FLUSH_THOLD has been patched on DUSG3 from 1024 to a smaller value since DPDK 18.11.2:
I'm not aware of any side effects; I think it is needed to get low enough latency even at low traffic rates. For more details see Intel FP 22288 <https://footprints.intel.com/MRcgi/MRlogin.pl?DL=22288DA14>.
[Jay] Currently, TXA_FLUSH_THRESHOLD is not a configurable attr.
TXA_MAX_NB_TX (128) looks the same kind of setting as CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE (16384), so should it also be tuned?
[Jay] They are different attributes. TXA_MAX_NB_TX refers to the max number of queues in the Tx adapter. CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE refers to the event buffer size in the Rx BD.
--- dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base.orig 2020-01-29 15:05:10.000000000 +0100
+++ dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base 2020-01-29 15:11:10.000000000 +0100
@@ -566,9 +566,9 @@
CONFIG_RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES=100
CONFIG_RTE_MAX_BRIDGE_ETH_INSTANCE=4
CONFIG_RTE_BRIDGE_ETH_INTR_RING_SIZE=32
-CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=128
+CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=16384
CONFIG_RTE_LIBRTE_BRIDGE_ETH_DEBUG=n
-CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=1024
+CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=32
--- dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h 2021-08-05 23:46:52.051051000 +0200
+++ dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h 2021-08-06 00:50:07.310766255 +0200
@@ -175,8 +175,8 @@
#define RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES 100
#define RTE_MAX_BRIDGE_ETH_INSTANCE 4
#define RTE_BRIDGE_ETH_INTR_RING_SIZE 32
-#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 128
-#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 1024
+#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 16384
+#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 10
#undef RTE_BRIDGE_ETH_TX_MULTI_PKT_EVENT
BR/Jaeeun