<div dir="ltr">Hello,<div><br></div><div>I tested all day both before and after patching.</div><div><br></div><div>I could not understand that it is a memory leak or not. Maybe it needs optimization. You lead, I follow.</div><div><br></div><div>1-) You are right, alloc_q is never bigger than 1024. But it always allocates 32 units then more than 1024 are being freed. Maybe it takes time, I don't know.<br></div><div><br></div><div>2-) I tested tx_rs_thresh via ping. After 210 sec , allocated memories are back to mempool (most of them). (driver virtio and eth-devices are binded via igb_uio) . It really takes time. So it is better to increase the size of the mempool. (<a href="https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html">https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html</a>)</div><div><br></div><div>3-) try to list mempool state in randomly</div><div><br></div><div>Test -1 -) (old code) ICMP testing. The whole mempool size is about 10350. So after FIFO reaches max-size -1024, %10 of the size of the mempool is in use. But little by little memory is waiting in use and doesn't go back to the pool. I could not find the reason.</div><div><br></div><div>MBUF_POOL 448 9,951 4.31% [|....................]<br>MBUF_POOL 1,947 8,452 18.72% [||||.................]<br>MBUF_POOL 1,803 8,596 17.34% [||||.................]<br>MBUF_POOL 1,941 8,458 18.67% [||||.................]<br>MBUF_POOL 1,900 8,499 18.27% [||||.................]<br>MBUF_POOL 1,999 8,400 19.22% [||||.................]<br>MBUF_POOL 1,724 8,675 16.58% [||||.................]<br>MBUF_POOL 1,811 8,588 17.42% [||||.................]<br>MBUF_POOL 1,978 8,421 19.02% [||||.................]<br>MBUF_POOL 2,008 8,391 19.31% [||||.................]<br>MBUF_POOL 1,854 8,545 17.83% [||||.................]<br>MBUF_POOL 1,922 8,477 18.48% [||||.................]<br>MBUF_POOL 1,892 8,507 18.19% [||||.................]<br>MBUF_POOL 1,957 8,442 18.82% [||||.................]<br></div><div><br></div><div>Test-2 -) (old code) run iperf3 udp testing that from Kernel to eth device. Waited to see what happens in 4 min. memory doesn't go back to the mempool. 
Test-2 -) (old code) Ran iperf3 UDP testing from the kernel to the eth device. Waited 4 minutes to see what happens: memory doesn't go back to the mempool, and little by little, memory usage increases.

MBUF_POOL 512 9,887 4.92% [|....................]
MBUF_POOL 1,411 8,988 13.57% [|||..................]
MBUF_POOL 1,390 9,009 13.37% [|||..................]
MBUF_POOL 1,558 8,841 14.98% [|||..................]
MBUF_POOL 1,453 8,946 13.97% [|||..................]
MBUF_POOL 1,525 8,874 14.66% [|||..................]
MBUF_POOL 1,592 8,807 15.31% [||||.................]
MBUF_POOL 1,639 8,760 15.76% [||||.................]
MBUF_POOL 1,624 8,775 15.62% [||||.................]
MBUF_POOL 1,618 8,781 15.56% [||||.................]
MBUF_POOL 1,708 8,691 16.42% [||||.................]
iperf is STOPPED to tx_fresh for 4 min
MBUF_POOL 1,709 8,690 16.43% [||||.................]
iperf is STOPPED to tx_fresh for 4 min
MBUF_POOL 1,709 8,690 16.43% [||||.................]
MBUF_POOL 1,683 8,716 16.18% [||||.................]
MBUF_POOL 1,563 8,836 15.03% [||||.................]
MBUF_POOL 1,726 8,673 16.60% [||||.................]
MBUF_POOL 1,589 8,810 15.28% [||||.................]
MBUF_POOL 1,556 8,843 14.96% [|||..................]
MBUF_POOL 1,610 8,789 15.48% [||||.................]
MBUF_POOL 1,616 8,783 15.54% [||||.................]
MBUF_POOL 1,709 8,690 16.43% [||||.................]
MBUF_POOL 1,740 8,659 16.73% [||||.................]
MBUF_POOL 1,546 8,853 14.87% [|||..................]
MBUF_POOL 1,710 8,689 16.44% [||||.................]
MBUF_POOL 1,787 8,612 17.18% [||||.................]
MBUF_POOL 1,579 8,820 15.18% [||||.................]
MBUF_POOL 1,780 8,619 17.12% [||||.................]
MBUF_POOL 1,679 8,720 16.15% [||||.................]
MBUF_POOL 1,604 8,795 15.42% [||||.................]
MBUF_POOL 1,761 8,638 16.93% [||||.................]
MBUF_POOL 1,773 8,626 17.05% [||||.................]
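Regarding point 2, here is a minimal sketch of the kind of TX queue setting I mean (the threshold value 32 is just an example, and setup_tx_queue() is an illustrative helper, not code from my application):

    #include <rte_ethdev.h>

    /* Configure a TX queue so the PMD reclaims transmitted mbufs sooner. */
    static int
    setup_tx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_txd,
                   unsigned int socket_id)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_txconf txconf;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;

            txconf = dev_info.default_txconf;
            txconf.tx_rs_thresh = 32;   /* example: request completion every 32 descriptors */
            txconf.tx_free_thresh = 32; /* example: start freeing once 32 descriptors are used */

            return rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                                          socket_id, &txconf);
    }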
Test-3 -) (after patching) Ran the same iperf3 UDP test from the kernel to the eth device. It looks stable after patching:

MBUF_POOL 76 10,323 0.73% [|....................]
MBUF_POOL 193 10,206 1.86% [|....................]
MBUF_POOL 96 10,303 0.92% [|....................]
MBUF_POOL 269 10,130 2.59% [|....................]
MBUF_POOL 102 10,297 0.98% [|....................]
MBUF_POOL 235 10,164 2.26% [|....................]
MBUF_POOL 87 10,312 0.84% [|....................]
MBUF_POOL 293 10,106 2.82% [|....................]
MBUF_POOL 99 10,300 0.95% [|....................]
MBUF_POOL 296 10,103 2.85% [|....................]
MBUF_POOL 90 10,309 0.87% [|....................]
MBUF_POOL 299 10,100 2.88% [|....................]
MBUF_POOL 86 10,313 0.83% [|....................]
MBUF_POOL 262 10,137 2.52% [|....................]
MBUF_POOL 81 10,318 0.78% [|....................]
MBUF_POOL 81 10,318 0.78% [|....................]
MBUF_POOL 87 10,312 0.84% [|....................]
MBUF_POOL 252 10,147 2.42% [|....................]
MBUF_POOL 97 10,302 0.93% [|....................]
iperf is STOPPED to tx_fresh for 4 min
MBUF_POOL 296 10,103 2.85% [|....................]
MBUF_POOL 95 10,304 0.91% [|....................]
MBUF_POOL 269 10,130 2.59% [|....................]
MBUF_POOL 302 10,097 2.90% [|....................]
MBUF_POOL 88 10,311 0.85% [|....................]
MBUF_POOL 305 10,094 2.93% [|....................]
MBUF_POOL 88 10,311 0.85% [|....................]
MBUF_POOL 290 10,109 2.79% [|....................]
MBUF_POOL 84 10,315 0.81% [|....................]
MBUF_POOL 85 10,314 0.82% [|....................]
MBUF_POOL 291 10,108 2.80% [|....................]
MBUF_POOL 303 10,096 2.91% [|....................]
MBUF_POOL 92 10,307 0.88% [|....................]


Best regards.

On Thu, 18 May 2023 at 17:56, Ferruh Yigit <ferruh.yigit@amd.com> wrote:

On 5/18/2023 9:14 AM, Yasin CANER wrote:
> Hello Ferruh,
>
> Thanks for your kind response. Also thanks to Stephen.
>
> Even if only 1 packet is consumed by the kernel, rx_kni allocates
> another 32 units each time. After a while the whole mempool is used up
> by alloc_q from kni; there is no room left for anything else.
>

What you described continues until 'alloc_q' is full. By default the fifo
length is 1024 (KNI_FIFO_COUNT_MAX); do you allocate fewer mbufs than that
in your mempool?
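For reference, this is a paraphrased sketch of the refill path in
lib/kni/rte_kni.c (kni_allocate_mbufs()), not the exact code;
kni_fifo_free_count(), kni_fifo_put(), va2pa() and MAX_MBUF_BURST_NUM (32)
are KNI internals. The point is that the fifo is only topped up to its free
count, so 'alloc_q' can never hold more than its length:

    static void
    kni_allocate_mbufs_sketch(struct rte_kni *kni)
    {
            struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
            void *phys[MAX_MBUF_BURST_NUM];
            int i, j, ret, allocq_free;

            /* Top up only as many slots as are free, at most 32 per call. */
            allocq_free = kni_fifo_free_count(kni->alloc_q);
            allocq_free = RTE_MIN(allocq_free, MAX_MBUF_BURST_NUM);

            for (i = 0; i < allocq_free; i++) {
                    pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
                    if (pkts[i] == NULL)
                            break; /* mempool exhausted */
                    phys[i] = va2pa(pkts[i]);
            }

            /* Mbufs that do not fit into the fifo are freed again. */
            ret = kni_fifo_put(kni->alloc_q, phys, i);
            for (j = ret; j < i; j++)
                    rte_pktmbuf_free(pkts[j]);
    }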

You can consider either increasing the mempool size or decreasing the
'alloc_q' fifo length, but reducing the fifo size may cause performance
issues, so you need to evaluate that option.

> Do you think my mistake is using one common mempool for both kni
> and eth?
>

Using the same mempool for both is fine.

> If it needs a separate mempool, I'd like to note that in the docs.
>
> Best regards.
>
> On Wed, 17 May 2023 at 20:53, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 5/9/2023 12:13 PM, Yasin CANER wrote:
> > Hello,
> >
> > I drew a flow via asciiflow to explain myself better. The problem is
> > that after transmitting packets (mbufs), they are never put into
> > kni->free_q to go back to the original pool. Each cycle it allocates
> > another 32 units, which causes leaks. Or I am missing something.
> >
> > I already tried the rte_eth_tx_done_cleanup() function but it didn't
> > fix anything.
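> >
> > For reference, roughly how I called it (a sketch; 'port_id' is from my
> > setup, and queue 0 with a free count of 0, meaning free as many as
> > possible, are placeholders):
> >
> >     /* Ask the PMD to free already-transmitted mbufs on TX queue 0. */
> >     int ret = rte_eth_tx_done_cleanup(port_id, 0, 0);
> >     if (ret < 0)
> >             printf("tx_done_cleanup failed or unsupported: %d\n", ret);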
> >
> > I am working on a patch to fix this issue but I am not sure if there
> > is another way.
> >
> > Best regards.
> >
> > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
> >
> >
> > unsigned
> > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
> >                  unsigned int num)
> > {
> >         unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
> >
> >         /* If buffers removed, allocate mbufs and then put them into
> >          * alloc_q */
> >         /* Question: how to test whether buffers were removed or not? */
> >         if (ret)
> >                 kni_allocate_mbufs(kni);
> >
> >         return ret;
> > }
> >
>
> Hello Yasin,
>
>
> You can expect the 'kni->alloc_q' fifo to be full; this is not a memory
> leak.
>
> As you pointed out, the number of mbufs consumed by the kernel from
> 'alloc_q' and the number of mbufs added to 'alloc_q' are not equal, and
> this is expected.
>
> The target here is to prevent buffer underflow from the kernel's
> perspective, so it will always have available mbufs for new packets.
> That is why new mbufs are added to 'alloc_q' at the same, or sometimes a
> higher, rate than they are consumed.
>
> You should calculate your mbuf requirement with the assumption that
> 'kni->alloc_q' will be full of mbufs.
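>
> As a hedged sketch of such a calculation (the descriptor counts, cache
> size and in-flight estimate below are example values, not
> recommendations):
>
>     #include <rte_mbuf.h>
>     #include <rte_lcore.h>
>
>     /* Worst case: RX ring + TX ring + a full 'alloc_q' (1024 mbufs,
>      * KNI_FIFO_COUNT_MAX) + mbufs held in flight by the application. */
>     unsigned int nb_mbufs = 1024 /* nb_rxd */ + 1024 /* nb_txd */ +
>                             1024 /* alloc_q */ + 32 * 4 /* in flight */;
>
>     struct rte_mempool *mp = rte_pktmbuf_pool_create("MBUF_POOL",
>             nb_mbufs, 256 /* cache */, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>             rte_socket_id());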
>
>
> 'kni->alloc_q' is freed when the kni is removed.
> Since 'alloc_q' holds the physical addresses of the mbufs, it is a little
> challenging to free them in userspace; that is why the kernel first
> tries to move the mbufs to the 'kni->free_q' fifo, please check
> 'kni_net_release_fifo_phy()' for it.
>
> If all are moved to the 'free_q' fifo, nothing is left in 'alloc_q'; if
> not, userspace frees 'alloc_q' in 'rte_kni_release()', with the following
> call:
> `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
>
>
> I can see you have submitted fixes for this issue. Although, as I
> explained above, I don't think a defect exists, I will review them
> today/tomorrow.
>
> Regards,
> Ferruh
>
>
> > On Mon, 8 May 2023 at 19:18, Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> >
> > On Mon, 8 May 2023 09:01:41 +0300
> > Yasin CANER <yasinncaner@gmail.com> wrote:
> >
> > > Hello Stephen,
> > >
> > > Thank you for the response, it helps me a lot. I understand the
> > > problem better.
> > >
> > > After reading the mbuf library documentation
> > > (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html) I realized
> > > that the 31-unit allocation doesn't return to the pool!
> >
> > If receive burst returns 1 mbuf, the other 31 pointers in the array
> > are not valid. They do not point to mbufs.
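> >
> > A minimal sketch of the consequence (my illustration; 'kni' is your KNI
> > handle): only the first 'nb' entries may be used or freed.
> >
> >     struct rte_mbuf *mbufs[32];
> >     unsigned int nb = rte_kni_rx_burst(kni, mbufs, 32);
> >
> >     /* mbufs[nb..31] are not valid pointers; never touch or free them. */
> >     for (unsigned int i = 0; i < nb; i++)
> >             rte_pktmbuf_free(mbufs[i]);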
> >
> > > A single mbuf can be freed via rte_pktmbuf_free() so it can go back
> > > to the pool.
> > >
> > > The main problem is that the allocation doesn't return to the
> > > original pool; it acts as used. So, after following the
> > > rte_pktmbuf_free()
> > > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902)
> > > function, I realized that there are 2 functions that help return
> > > mbufs to the pool.
> > >
> > > These are rte_mbuf_raw_free()
> > > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432)
> > > and rte_pktmbuf_free_seg()
> > > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37).
> > > I will focus on them.
> > >
> > > If there is another suggestion, I will be very pleased.
> > >
> > > Best regards.
> > >
> > > Yasin CANER
> > > Ulak