<div dir="ltr">To answer your original question, yes, I've hit 400Gbps many times provided there were enough queues/cores. You can also see test 12.2 here achieved line rate:<div><br></div><div><a href="https://fast.dpdk.org/doc/perf/DPDK_23_11_NVIDIA_NIC_performance_report.pdf">https://fast.dpdk.org/doc/perf/DPDK_23_11_NVIDIA_NIC_performance_report.pdf</a><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jul 24, 2024 at 4:58 AM Yasuhiro Ohara <<a href="mailto:yasu1976@gmail.com">yasu1976@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Thomas,<br>

On Wed, Jul 24, 2024 at 4:58 AM Yasuhiro Ohara <yasu1976@gmail.com> wrote:
> Hi Thomas,
> Thank you for getting back to this.
>
> From what we have seen, it appears to be a Mellanox firmware issue
> rather than a pktgen or DPDK issue.
> When we were using the ConnectX-7 with firmware version 28.41.1000
> on a Core i9 machine, it limited its TX bandwidth to 100Gbps,
> even though the physical maximum should be 400Gbps.
> If we downgraded the firmware to 28.39.2048 or 28.39.3004,
> it could send 256Gbps on the same Core i9 machine.
>
> Our bandwidth demonstration event completed successfully,
> so for now we are fine with this partial understanding.
> I think my colleague can help if someone wants to debug
> this issue further.
>
> Thank you anyway.
>
> Regards,
> Yasu
>
> On Tue, Jul 23, 2024 at 2:37, Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > Hello,
> >
> > I see there is no answer.
> >
> > Did you try with testpmd?
> > I don't know whether it could be a limitation of dpdk-pktgen.
> >
> > 29/05/2024 03:32, Yasuhiro Ohara:
> > > Hi. My colleague is trying to generate 400Gbps traffic using a ConnectX-7,
> > > but with no luck.
> > >
> > > Has anyone succeeded in generating 400Gbps (frames larger than 1500B are fine),
> > > or are there any known issues?
> > >
> > > Using dpdk-pktgen, the link is successfully recognized as 400GbE,
> > > but when we start the traffic, it seems to be capped at 100Gbps.
> > >
> > > Some info follows.
> > >
> > > [ 1.490256] mlx5_core 0000:01:00.0: firmware version: 28.41.1000
> > > [ 1.492853] mlx5_core 0000:01:00.0: 504.112 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x16 link)
> > > [ 1.805419] mlx5_core 0000:01:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
> > > [ 1.808477] mlx5_core 0000:01:00.0: E-Switch: Total vports 18, per vport: max uc(128) max mc(2048)
> > > [ 1.827270] mlx5_core 0000:01:00.0: Port module event: module 0, Cable unplugged
> > > [ 1.830317] mlx5_core 0000:01:00.0: mlx5_pcie_event:298:(pid 9): PCIe slot advertised sufficient power (75W).
> > > [ 1.830610] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
> > > [ 1.929813] mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
> > >
> > > Device type: ConnectX7
> > > Name: MCX75310AAS-NEA_Ax
> > > Description: NVIDIA ConnectX-7 HHHL Adapter card; 400GbE / NDR IB (default mode); Single-port OSFP; PCIe 5.0 x16; Crypto Disabled; Secure Boot Enabled;
> > > Device: /dev/mst/mt4129_pciconf0
> > >
> > > Enabled Link Speed (Ext.) : 0x00010000 (400G_4X)
> > > Supported Cable Speed (Ext.) : 0x00010800 (400G_4X,100G_1X)
> > >
> > > pktgen result:
> > >
> > > Pkts/s  Rx    : 0
> > >         TX    : 8059264
> > > MBits/s Rx/Tx : 0/97742
> > >
> > > Any info is appreciated. Thanks.
> > >
> > > regards,
> > > Yasu