ConnectX-7 400GbE wire-rate?

Yasuhiro Ohara yasu1976 at gmail.com
Wed Jul 24 18:07:44 CEST 2024


Hi Cliff.

I saw it, and thought at the time that 200GbE x 2 might behave differently
from 400GbE x 1.
It is good to know that you have hit 400Gbps multiple times.

Thank you.

regards,
Yasu


On Wed, Jul 24, 2024 at 11:50 PM Cliff Burdick <shaklee3 at gmail.com> wrote:
>
> To answer your original question: yes, I have hit 400Gbps many times, provided there were enough queues/cores. You can also see that test 12.2 in this report achieved line rate:
>
> https://fast.dpdk.org/doc/perf/DPDK_23_11_NVIDIA_NIC_performance_report.pdf
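For anyone trying to reproduce this, a multi-queue txonly run might look like the sketch below (the PCI address, core list, and queue counts are assumptions, not the report's exact settings; the PDF documents those):

```shell
# TX-only sweep on a single ConnectX-7 port.
# 0000:01:00.0 and the 16-queue/16-core split are assumptions;
# tune them to the local machine.
dpdk-testpmd -l 0-16 -n 4 -a 0000:01:00.0 -- \
    --forward-mode=txonly --txq=16 --rxq=16 --nb-cores=16 \
    --txpkts=1518 --burst=64 --stats-period=1
```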
>
> On Wed, Jul 24, 2024 at 4:58 AM Yasuhiro Ohara <yasu1976 at gmail.com> wrote:
>>
>> Hi Thomas,
>> Thank you for getting back to this.
>>
>> From what we have seen, it seemed to be a Mellanox firmware issue
>> rather than a pktgen or DPDK issue.
>> When we were using a ConnectX-7 with firmware version 28.41.1000
>> on a Core i9 machine, TX bandwidth was capped at 100Gbps,
>> though the physical maximum should be 400Gbps.
>> After downgrading the firmware to 28.39.2048 or 28.39.3004,
>> it could send 256Gbps on the same Core i9 machine.
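For reference, the firmware query and downgrade steps can be sketched with the standard NVIDIA MFT/mstflint tools (the device path is taken from the log further down in this thread; the image filename is hypothetical):

```shell
# Show the firmware currently running on the adapter
mlxfwmanager --query -d /dev/mst/mt4129_pciconf0
# Equivalent query with flint
flint -d /dev/mst/mt4129_pciconf0 query
# Burn a specific older image (hypothetical filename), then reset
flint -d /dev/mst/mt4129_pciconf0 -i fw-ConnectX7-rel-28_39_2048.bin burn
mlxfwreset -d /dev/mst/mt4129_pciconf0 reset
```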
>>
>> Our bandwidth demonstration event completed successfully,
>> so we are fine with this limited understanding for now.
>> I think my colleague can help if someone wants to debug
>> this issue further.
>>
>> Thank you anyway.
>>
>> Regards,
>> Yasu
>>
>> On Tue, Jul 23, 2024 at 2:37 AM Thomas Monjalon <thomas at monjalon.net> wrote:
>> >
>> > Hello,
>> >
>> > I see there is no answer.
>> >
>> > Did you try with testpmd?
>> > I do not know whether it could be a limitation of dpdk-pktgen.
>> >
>> >
>> >
>> > 29/05/2024 03:32, Yasuhiro Ohara:
>> > > Hi. My colleague is trying to generate 400Gbps traffic using ConnectX-7,
>> > > but with no luck.
>> > >
>> > > Has anyone succeeded in generating 400Gbps (packets larger than 1500B
>> > > are fine), or are there any known issues?
>> > >
>> > > Using dpdk-pktgen, the link is successfully recognized as 400GbE
>> > > but when we start the traffic, it seems to be capped at 100Gbps.
>> > >
>> > > Some info follows.
>> > >
>> > > [    1.490256] mlx5_core 0000:01:00.0: firmware version: 28.41.1000
>> > > [    1.492853] mlx5_core 0000:01:00.0: 504.112 Gb/s available PCIe
>> > > bandwidth (32.0 GT/s PCIe x16 link)
>> > > [    1.805419] mlx5_core 0000:01:00.0: Rate limit: 127 rates are
>> > > supported, range: 0Mbps to 195312Mbps
>> > > [    1.808477] mlx5_core 0000:01:00.0: E-Switch: Total vports 18, per
>> > > vport: max uc(128) max mc(2048)
>> > > [    1.827270] mlx5_core 0000:01:00.0: Port module event: module 0,
>> > > Cable unplugged
>> > > [    1.830317] mlx5_core 0000:01:00.0: mlx5_pcie_event:298:(pid 9):
>> > > PCIe slot advertised sufficient power (75W).
>> > > [    1.830610] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8)
>> > > StrdSz(2048) RxCqeCmprss(0)
>> > > [    1.929813] mlx5_core 0000:01:00.0: Supported tc offload range -
>> > > chains: 4294967294, prios: 4294967295
>> > >
>> > > Device type:    ConnectX7
>> > > Name:           MCX75310AAS-NEA_Ax
>> > > Description:    NVIDIA ConnectX-7 HHHL Adapter card; 400GbE / NDR IB
>> > > (default mode); Single-port OSFP; PCIe 5.0 x16; Crypto Disabled;
>> > > Secure Boot Enabled;
>> > > Device:         /dev/mst/mt4129_pciconf0
>> > >
>> > > Enabled Link Speed (Ext.)       : 0x00010000 (400G_4X)
>> > > Supported Cable Speed (Ext.)    : 0x00010800 (400G_4X,100G_1X)
>> > >
>> > > pktgen result:
>> > >
>> > > Pkts/s  Rx    :        0
>> > >         Tx    :  8059264
>> > >
>> > > MBits/s Rx/Tx :  0/97742
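Those counters are self-consistent with a 100GbE cap; a quick back-of-the-envelope check using the numbers above:

```shell
# Sanity-check the pktgen counters: 8,059,264 pps of ~1518B frames.
pps=8059264
frame=1518
# Data rate in Mbit/s (close to the 97742 reported by pktgen)
echo "$((pps * frame * 8 / 1000000)) Mbit/s data rate"
# Wire rate including 20B of preamble + inter-frame gap per frame:
# just under 100 Gbps, i.e. the link is behaving like a 100GbE port.
echo "$((pps * (frame + 20) * 8 / 1000000)) Mbit/s on the wire"
```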
>> > >
>> > >
>> > > Any info is appreciated. Thanks.
>> > >
>> > > regards,
>> > > Yasu
>> > >
>> >
>> >
>> >
>> >
>> >

