[dpdk-users] [ConnectX-5 MCX515A-CCAT / MCX516A-CCAT] Can only generate 53Gb/s with 64B packets

Yan Lei l.yan at epfl.ch
Thu Mar 12 00:44:33 CET 2020


The previous email had a formatting problem, sorry about that. Updated version below!

Hi,

I am currently struggling to get more than 53Gb/s with 64B packets on both the MCX515A-CCAT and MCX516A-CCAT adapters when running a DPDK app that generates and transmits packets. With 256B packets I can get 98Gb/s.

Has anyone seen the same performance on these NICs? I checked the performance reports on https://core.dpdk.org/perf-reports/ but there are no numbers for these NICs.

Is this an inherent limitation of these NICs (i.e., they only reach 100Gb/s with larger packets)? If not, which firmware/driver/DPDK/system settings could I tune to get 100Gb/s with 64B packets?
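
For reference, here is the back-of-envelope packet-rate math (assuming the Gb/s figures count bits on the wire, including the 20B per-frame preamble/SFD/inter-frame-gap overhead; 98Gb/s at 256B is only reachable under that accounting):

  64B line rate at 100GbE:  100e9 / ((64 + 20) * 8)  ≈ 148.8 Mpps
  53Gb/s at 64B:             53e9 / ((64 + 20) * 8)  ≈  78.9 Mpps (~7.9 Mpps per core over 10 cores)
  256B line rate at 100GbE: 100e9 / ((256 + 20) * 8) ≈  45.3 Mpps
  98Gb/s at 256B:            98e9 / ((256 + 20) * 8) ≈  44.4 Mpps

So the 256B case is essentially at line rate, while the 64B case looks packet-rate limited (per-packet cost in the CPU, PCIe, or NIC) rather than bandwidth limited.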

My setup is as follows:

- CPU: E5-2697 v3 (14 cores, SMT disabled, CPU frequency fixed @ 2.6 GHz)
- NIC: Mellanox MCX515A-CCAT / MCX516A-CCAT (using only one port for TX, installed in a PCIe Gen3 x16 slot; see the PCIe note after this list)
- DPDK: 19.05
- RDMA-CORE: v28.0
- Kernel: 5.3.0
- OS: Ubuntu 18.04
- Firmware: 16.26.1040
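
A quick note on the PCIe side (back-of-envelope, not measured): PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding, so a x16 slot gives roughly 8 * 16 * 128/130 ≈ 126 Gb/s raw, and typically ~100-110 Gb/s of usable payload bandwidth once TLP/DLLP overhead is subtracted. With 64B packets, each DMA transaction carries proportionally much more per-transaction overhead, so small-packet workloads can hit PCIe transaction-rate limits well before the link's byte bandwidth.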

I measured the TX rate with DPDK's testpmd:

$ ./testpmd -l 3-13 -n 4 -w 02:00.0 -- -i --port-topology=chained --nb-ports=1 --rxq=10 --txq=10 --nb-cores=10 --burst=128 --rxd=512 --txd=512 --mbcache=512 --forward-mode=txonly

So 10 cores generate and transmit 64B packets on 10 NIC queues.
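
Would tuning the mlx5 PMD's TX data-inlining / multi-packet-write devargs help here? They trade CPU cycles for fewer PCIe transactions per packet, which seems relevant given the math above. Something along these lines (the parameter names below are from the mlx5 guide for the 19.05-era PMD, before the 19.08 rework renamed them; the values are illustrative guesses on my side, not a tested configuration):

$ ./testpmd -l 3-13 -n 4 -w 02:00.0,txq_inline=128,txqs_min_inline=1,txq_mpw_en=1 -- -i --port-topology=chained --nb-ports=1 --rxq=10 --txq=10 --nb-cores=10 --burst=128 --rxd=512 --txd=512 --mbcache=512 --forward-mode=txonly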

Your feedback will be much appreciated.

Thanks,
Lei

