[DPDK/other Bug 1533] testpmd performance drops with Mellanox ConnectX6 when using 8 cores 8 queues
bugzilla at dpdk.org
Thu Sep 5 14:53:59 CEST 2024
https://bugs.dpdk.org/show_bug.cgi?id=1533
Bug ID: 1533
Summary: testpmd performance drops with Mellanox ConnectX6
when using 8 cores 8 queues
Product: DPDK
Version: 23.11
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: other
Assignee: dev at dpdk.org
Reporter: wangliangxing at hygon.cn
Target Milestone: ---
Created attachment 287
--> https://bugs.dpdk.org/attachment.cgi?id=287&action=edit
mpps and packets stats of 8 cores 8 queues
Environment: Intel Cascade Lake server running CentOS 7.
The Mellanox ConnectX-6 NIC and the cores used are on the same NUMA node.
Input traffic is constant line rate 100 Gbps, 64-byte packets, 256 flows.
Test duration is 30 seconds.
Run testpmd in io mode with 7 cores and 7 queues: ./dpdk-testpmd -l 24-32 -n 4 -a
af:00.0 -- --nb-cores=7 --rxq=7 --txq=7 -i
Rx/Tx throughput is 91.6/91.6 MPPS, with no TX-dropped packets.
However, running testpmd in io mode with 8 cores and 8 queues: ./dpdk-testpmd -l 24-32
-n 4 -a af:00.0 -- --nb-cores=8 --rxq=8 --txq=8 -i
Rx/Tx throughput is 113.6/85.4 MPPS. The Tx rate is lower than in the 7-core run, and
there are many TX-dropped packets. Please refer to the attached picture.
I noticed a similar issue on other x86 and aarch64 servers too.
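For context, the aggregate figures above can be normalized to per-forwarding-core Tx rates (a quick back-of-the-envelope calculation using only the numbers reported here, not testpmd output parsing):

```python
# Per-core Tx throughput from the two testpmd runs reported above.
# 7-core run: 91.6 MPPS Tx total; 8-core run: 85.4 MPPS Tx total.

def per_core_mpps(total_mpps: float, cores: int) -> float:
    """Average forwarded MPPS per forwarding core."""
    return total_mpps / cores

tx_7c = per_core_mpps(91.6, 7)  # ~13.1 MPPS/core with 7 cores / 7 queues
tx_8c = per_core_mpps(85.4, 8)  # ~10.7 MPPS/core with 8 cores / 8 queues

print(f"7-core: {tx_7c:.2f} MPPS/core, 8-core: {tx_8c:.2f} MPPS/core")
```

So the 8-core run is not just failing to scale: each core forwards noticeably less than in the 7-core run, consistent with the large TX-dropped count. Note also that -l 24-32 gives testpmd 9 lcores, one of which is the main (interactive) lcore, so --nb-cores=8 uses every remaining core.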