<div dir="ltr">I didn't mention that the packet size used for the tests was 68 bytes.<div><br></div><div>Also note there is a typo in the profile setting: the PIR rate used was 1200000000 (not 150000000). However, this does not seem to make any difference to the results.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jun 28, 2024 at 5:53 AM Tony Hart <<a href="mailto:tony.hart@domainhart.com">tony.hart@domainhart.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I'm seeing an unexpected performance drop on the CX7 when comparing three configurations: 3 groups with a policer, 3 groups without a policer, and 2 groups without a policer. The performance of each configuration is 72 Gbps, 104 Gbps, and 124 Gbps respectively. So the first configuration drops to almost half the performance of the third, even though all three are just hairpinning packets (the policer is used only to color the packets; no fate actions are taken as a result).<br><br>This is on a 400G link using SWS mode. 
I know there was a similar issue reported on this mailing list recently related to SWS versus HWS performance, but this issue seems different.<br><br>Any thoughts welcome.<div><br></div><div>thanks</div><div>tony</div><div><br></div><div>These are the testpmd commands used to recreate the issue...<br><br><u>Common commands:<br></u>add port meter profile trtcm_rfc4115 0 1 1000 150000000 1000 1000 1<br><br>add port meter policy 0 1 g_actions end y_actions end r_actions drop / end<br><br>create port meter 0 1 1 1 yes 0xffff 0 g 0<br><br><br><u>3 groups with policer<br></u><br>flow create 0 ingress group 0 pattern end actions jump group 1 / end<br><br>flow create 0 ingress group 1 pattern end actions meter mtr_id 1 / jump group 2 / end<br><br>flow create 0 ingress group 2 pattern eth / ipv4 / end actions count / rss queues 6 7 8 9 end / end<br><br><br><u>3 groups without policer<br></u><br>flow create 0 ingress group 0 pattern end actions jump group 1 / end<br><br>flow create 0 ingress group 1 pattern end actions jump group 2 / end<br><br>flow create 0 ingress group 2 pattern eth / ipv4 / end actions count / rss queues 6 7 8 9 end / end<br><br><br><u>2 groups without policer<br></u><br>flow create 0 ingress group 0 pattern end actions jump group 1 / end<br><br>flow create 0 ingress group 1 pattern eth / ipv4 / end actions count / rss queues 6 7 8 9 end / end<br><br>thanks,<br>tony<div><br></div><div><u>testpmd command line</u></div><div>/dpdk-testpmd -l8-14 -a81:00.0,dv_flow_en=1 -- -i --nb-cores 6 --rxq 6 --txq 6 --port-topology loop --forward-mode=rxonly --hairpinq 4 --hairpin-mode 0x10<br></div><div><br></div><div><br></div><div><u>Versions</u></div></div><div>mlnx-ofa_kernel-24.04-OFED.24.04.0.6.6.1.rhel9u4.x86_64<br>kmod-mlnx-ofa_kernel-24.04-OFED.24.04.0.6.6.1.rhel9u4.x86_64<br>mlnx-ofa_kernel-devel-24.04-OFED.24.04.0.6.6.1.rhel9u4.x86_64<br>ofed-scripts-24.04-OFED.24.04.0.6.6.x86_64<br><br>DPDK: v24.03<div></div><br></div></div>
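<div><br></div><div><u>Meter profile arguments (for reference)</u></div><div>My reading of the trtcm_rfc4115 argument order in the testpmd meter CLI (please correct me if I've mislabeled a field) is:<br><br>add port meter profile trtcm_rfc4115 (port_id) (profile_id) (cir) (eir) (cbs) (ebs) (packet_mode)<br><br>so in the profile command above: port_id = 0, profile_id = 1, cir = 1000, eir = 150000000 (the rate I referred to as PIR), cbs = 1000, ebs = 1000, packet_mode = 1.<br></div>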
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature">tony<br></div>