<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hi Ciara,</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="color: rgb(0, 0, 0);"><span style="font-size: 14.6667px;">Thanks for those suggestions.</span></div>
<div style="color: rgb(0, 0, 0);"><br>
</div>
<div style="color: rgb(0, 0, 0);">Compare to busy polling mode, I prefer to use one more core to pinning IRQ.</div>
<div style="color: rgb(0, 0, 0);"><br>
</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<span style="color: rgb(0, 0, 0); font-family: Calibri, Helvetica, sans-serif; font-size: 12pt;">I configured the queue to 1 on NIC, and I checked smp_affinity is already different with my application.</span></div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<span style="color: rgb(0, 0, 0); font-family: Calibri, Helvetica, sans-serif; font-size: 12pt;">IRQ on core 15/11, but my app is bind to core rx 1/2, tx 3.</span></div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
[root@gf]$ cat /proc/interrupts | grep mlx | grep mlx5_comp0
<div> 63: 48 46227515 0 0 151 0 0 0 0 0 0 0 0 0 1 19037579 PCI-MSI 196609-edge mlx5_comp0@pci:0000:00:0c.0</div>
102: 49 0 0 0 0 1 0 0 0 45030 0 11625905 0 50609158 0 308 PCI-MSI 212993-edge mlx5_comp0@pci:0000:00:0d.0<br>
</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
[root@gf]$ cat /proc/irq/63/smp_affinity
<div>8000</div>
<div>[root@gf]$ cat /proc/irq/102/smp_affinity</div>
0800<br>
</div>
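<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
(For reference, those masks decode to cores 15 and 11; assuming the kernel exposes it, smp_affinity_list reports the same pinning as a plain CPU list, so the expected output would be:)</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
[root@gf]$ cat /proc/irq/63/smp_affinity_list
<div>15</div>
<div>[root@gf]$ cat /proc/irq/102/smp_affinity_list</div>
11<br>
</div>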
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
The performance increased a little, but there was no big change...</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
I will continue investigating the issue. If you have any other tips, please feel free to share them with me. Thanks again.
<span id="🙂">🙂</span></div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Br,</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Christian</div>
<div>
<div id="appendonsend"></div>
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
<br>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>发件人:</b> Loftus, Ciara <ciara.loftus@intel.com><br>
<b>发送时间:</b> 2021年11月5日 7:48<br>
<b>收件人:</b> Hong Christian <hongguochun@hotmail.com><br>
<b>抄送:</b> users@dpdk.org <users@dpdk.org><br>
<b>主题:</b> RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue configuration</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt">
<div class="PlainText">> <br>
> Hi Ciara,<br>
> <br>
> Thank you for your quick response and useful tips.<br>
> That's a good idea to change the rx flow, I will test it later.<br>
> <br>
> Meanwhile, I tested the AF_XDP PMD with a 1 rx / 1 tx queue configuration. The<br>
> performance is much worse than the MLX5 PMD, nearly a 2/3 drop... total traffic is<br>
> 3Gbps.<br>
> I also checked some statistics; they show drops on xdp recv and app internal<br>
> transfer. It seems xdp recv and send take time, since there is no difference<br>
> on the app side between the two tests (dpdk/xdp).<br>
> <br>
> Is any extra configuration required for the AF_XDP PMD?<br>
> Should the XDP PMD have similar performance to the DPDK PMD under 10Gbps?<br>
<br>
Hi Christian,<br>
<br>
You're welcome. I have some suggestions for improving the performance.<br>
1. Preferred busy polling<br>
If you are willing to upgrade your kernel to >=5.11 and your DPDK to v21.05 you can avail of the preferred busy polling feature. Info on the benefits can be found here:
<a href="http://mails.dpdk.org/archives/dev/2021-March/201172.html">http://mails.dpdk.org/archives/dev/2021-March/201172.html</a><br>
Essentially it should improve the performance for a single core use case (driver and application on same core).<br>
2. IRQ pinning<br>
If you are not using the preferred busy polling feature, I suggest pinning the IRQ for your driver to a dedicated core that is not busy with other tasks eg. the application. For most devices you can find IRQ info in /proc/interrupts and you can change the pinning
by modifying /proc/irq/<irq_number>/smp_affinity<br>
3. Queue configuration<br>
Make sure you are using all queues on the device. Check the output of ethtool -l <iface> and either set the PMD queue_count to equal the number of queues, or reduce the number of queues using ethtool -L <iface> combined N.<br>
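For suggestion 1 (preferred busy polling), a minimal sketch of enabling it, assuming DPDK v21.05+ where the af_xdp PMD accepts a busy_budget devarg and a 5.11+ kernel; the interface name and budget value are only placeholders:<br>
--no-pci --vdev net_af_xdp0,iface=ens12,queue_count=1,busy_budget=64<br>
# commonly recommended alongside preferred busy polling: defer hard IRQs and set a GRO flush timeout<br>
echo 2 > /sys/class/net/ens12/napi_defer_hard_irqs<br>
echo 200000 > /sys/class/net/ens12/gro_flush_timeout<br>
<br>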
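For suggestion 2 (IRQ pinning), an illustrative example; the IRQ number and mask are placeholders, pick a core your application does not use, and note that irqbalance may overwrite the setting if it is running:<br>
# pin IRQ 63 to core 15 (hex CPU mask, bit 15)<br>
echo 8000 > /proc/irq/63/smp_affinity<br>
<br>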
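For suggestion 3 (queue configuration), example ethtool commands; the interface name and counts are illustrative:<br>
ethtool -l ens12 # show the current and maximum channel counts<br>
ethtool -L ens12 combined 1 # reduce to a single queue to match queue_count=1<br>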
<br>
I can't confirm whether the performance should reach that of the driver-specific PMD, but hopefully some of the above helps getting some of the way there.<br>
<br>
Thanks,<br>
Ciara<br>
<br>
> <br>
> Br,<br>
> Christian<br>
> ________________________________________<br>
> From: Loftus, Ciara <<a href="mailto:ciara.loftus@intel.com">mailto:ciara.loftus@intel.com</a>><br>
> Sent: 4 November 2021 10:19<br>
> To: Hong Christian <<a href="mailto:hongguochun@hotmail.com">mailto:hongguochun@hotmail.com</a>><br>
> Cc: <a href="mailto:users@dpdk.org">mailto:users@dpdk.org</a> <<a href="mailto:users@dpdk.org">mailto:users@dpdk.org</a>>;<br>
> <a href="mailto:xiaolong.ye@intel.com">mailto:xiaolong.ye@intel.com</a> <<a href="mailto:xiaolong.ye@intel.com">mailto:xiaolong.ye@intel.com</a>><br>
> Subject: RE: pmd_af_xdp: does net_af_xdp support different rx/tx queue<br>
> configuration<br>
> <br>
> ><br>
> > Hello DPDK users,<br>
> ><br>
> > Sorry to disturb.<br>
> ><br>
> > I am currently testing the net_af_xdp device.<br>
> > But I found that device configuration always fails if I configure my rx queue<br>
> > count != tx queue count.<br>
> > In my project, I use pipeline mode and require 1 rx and several tx queues.<br>
> ><br>
> > Example:<br>
> > I run my app with the parameters: "--no-pci --vdev<br>
> > net_af_xdp0,iface=ens12,queue_count=2 --vdev<br>
> > net_af_xdp1,iface=ens13,queue_count=2"<br>
> > When I configure 1 rx and 2 tx queues, setup fails with the print: "Port0<br>
> > dev_configure = -22"<br>
> ><br>
> > After checking some xdp docs, I found that rx and tx are always bound together,<br>
> > each connected to a fill and completion ring.<br>
> > But I still want to confirm this with you. Could you please share your<br>
> > comments?<br>
> > Thanks in advance.<br>
> <br>
> Hi Christian,<br>
> <br>
> Thanks for your question. Yes, at the moment this configuration is forbidden<br>
> for the AF_XDP PMD. One socket is created for each pair of rx and tx queues.<br>
> However maybe this is an unnecessary restriction of the PMD. It is indeed<br>
> possible to create a socket with either one rxq or one txq. I will put looking<br>
> into the feasibility of enabling this in the PMD on my backlog.<br>
> In the meantime, one workaround you could try would be to create an equal<br>
> number of rxqs and txqs but steer all traffic to the first rxq using some NIC<br>
> filtering eg. tc.<br>
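> One illustrative way to do that steering, using ethtool rather than tc (the<br>
> interface name and port number below are placeholders):<br>
> ethtool -X ens12 equal 1   # restrict the RSS indirection table to rx queue 0<br>
> ethtool -N ens12 flow-type udp4 dst-port 4000 action 0   # or steer specific flows to queue 0<br>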
> <br>
> Thanks,<br>
> Ciara<br>
> <br>
> ><br>
> > Br,<br>
> > Christian<br>
</div>
</span></font></div>
</div>
</body>
</html>