<div dir="ltr">Hello all,<div><br></div><div>I never stop testing to see results.  It has been 10 days. After patching, no leak.</div><div><br></div><div>MBUF_POOL                      82             10,317                     0.79% [|....................]<br>MBUF_POOL                      83             10,316                     0.80% [|....................]<br>MBUF_POOL                      93             10,306                     0.89% [|....................]<br></div><div><br></div><div>Sometimes, it takes time to get back to mempool. In my opinion, it is about the OVS-DPDK/openstack environment issue.  If I have a chance, try to run an Intel Bare-metal environment.</div><div><br></div><div>After meeting with Ferruh, he explained concerns about performance issues so I decided to continue manual patching for my application. </div><div><br></div><div>It is removed from bugzilla.</div><div><br></div><div>For your information.</div><div>Best regards.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Ferruh Yigit <<a href="mailto:ferruh.yigit@amd.com">ferruh.yigit@amd.com</a>>, 19 May 2023 Cum, 21:43 tarihinde şunu yazdı:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 5/19/2023 6:47 PM, Yasin CANER wrote:<br>
> Hello,
>

Hi,

Can you please bottom-post? The combination of both makes the discussion
very hard to follow.

> I tested all day, both before and after patching.
>
> I could not determine whether it is a memory leak or not. Maybe it
> needs optimization. You lead, I follow.
>
> 1-) You are right, alloc_q is never bigger than 1024. But it always
> allocates another 32 units, and then more than 1024 end up being freed.
> Maybe it takes time, I don't know.
>

At least alloc_q is only freed on kni release, so mbufs in that fifo can
sit there for as long as the application is running.
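
They are only given back at shutdown; for reference, the existing call
that does it:

    /* releases the KNI interface; alloc_q mbufs go back to the pool */
    rte_kni_release(kni);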

> 2-) I tested tx_rs_thresh via ping. After 210 sec, the allocated mbufs
> go back to the mempool (most of them). (The driver is virtio and the
> eth devices are bound via igb_uio.) It really takes time, so it is
> better to increase the size of the mempool.
> (https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html)
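>
> For reference, a minimal sketch of what I mean (threshold values are
> examples, I assume dev_info comes from rte_eth_dev_info_get(), and the
> exact constraints are PMD-specific):
>
> struct rte_eth_txconf txconf = dev_info.default_txconf;
>
> txconf.tx_rs_thresh = 8;    /* write back tx descriptors more often */
> txconf.tx_free_thresh = 16; /* free transmitted mbufs more eagerly */
> ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
>                              rte_eth_dev_socket_id(port_id), &txconf);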
>
> 3-) I tried listing the mempool state at random intervals.
>
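> For reference, a minimal sketch of how I dump the pool state (assuming
> the pool was created with rte_pktmbuf_pool_create()):
>
> #include <stdio.h>
> #include <rte_mempool.h>
>
> static void dump_pool(const struct rte_mempool *mp)
> {
>         unsigned int used = rte_mempool_in_use_count(mp);
>         unsigned int avail = rte_mempool_avail_count(mp);
>
>         printf("%-24s %7u %10u %10.2f%%\n", mp->name, used, avail,
>                used * 100.0 / (used + avail));
> }
>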

It looks like the number of mbufs in use is increasing, but in the worst
case both alloc_q and free_q can be full, which makes 2048 mbufs, and in
the tests below the number of used mbufs is never bigger than this
value, so it looks OK.
If you run your test for a longer duration, do you observe the number of
used mbufs going much above this value?
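
For sizing, a rough worst-case budget sketch (the ring and burst sizes
are hypothetical examples; only the 1024 fifo length is the DPDK
default):

#define KNI_FIFO_LEN    1024                /* default KNI fifo length */
#define KNI_WORST_CASE  (2 * KNI_FIFO_LEN)  /* alloc_q + free_q full */
#define NB_RXD          512                 /* example rx ring size */
#define NB_TXD          512                 /* example tx ring size */
#define BURST           32

/* minimum pool size before counting the app's own queues and buffers */
#define POOL_MIN        (KNI_WORST_CASE + NB_RXD + NB_TXD + 2 * BURST)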

Also, what is the 'num' argument passed to the 'rte_kni_tx_burst()' API?
If it is bigger than 'MAX_MBUF_BURST_NUM', that may lead mbufs to
accumulate in the free_q fifo.
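
A minimal capping sketch (assuming 'pkts' and 'n' come from your
application's rx path; 32 is MAX_MBUF_BURST_NUM):

unsigned int sent = 0;

while (sent < n) {
        unsigned int burst = RTE_MIN(n - sent, 32u);
        unsigned int ret = rte_kni_tx_burst(kni, pkts + sent, burst);

        sent += ret;
        if (ret < burst)
                break; /* fifo full: the caller must free the rest */
}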


As an experiment, it is possible to decrease the KNI fifo sizes and
observe the result.
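
A sketch of that experiment (shrink the constant in lib/kni/rte_kni.c,
lib/librte_kni/ in older releases, and rebuild; 256 is just an example
value):

-#define KNI_FIFO_COUNT_MAX     1024
+#define KNI_FIFO_COUNT_MAX     256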


> Test-1 -) (old code) ICMP testing. The whole mempool size is about
> 10,350. So after the FIFO reaches its max size of 1024, about 10% of
> the mempool is in use. But little by little, memory stays in use and
> doesn't go back to the pool. I could not find the reason.
>
> MBUF_POOL                      448            9,951                     4.31% [|....................]
> MBUF_POOL                      1,947          8,452                    18.72% [||||.................]
> MBUF_POOL                      1,803          8,596                    17.34% [||||.................]
> MBUF_POOL                      1,941          8,458                    18.67% [||||.................]
> MBUF_POOL                      1,900          8,499                    18.27% [||||.................]
> MBUF_POOL                      1,999          8,400                    19.22% [||||.................]
> MBUF_POOL                      1,724          8,675                    16.58% [||||.................]
> MBUF_POOL                      1,811          8,588                    17.42% [||||.................]
> MBUF_POOL                      1,978          8,421                    19.02% [||||.................]
> MBUF_POOL                      2,008          8,391                    19.31% [||||.................]
> MBUF_POOL                      1,854          8,545                    17.83% [||||.................]
> MBUF_POOL                      1,922          8,477                    18.48% [||||.................]
> MBUF_POOL                      1,892          8,507                    18.19% [||||.................]
> MBUF_POOL                      1,957          8,442                    18.82% [||||.................]
>
> Test-2 -) (old code) Ran an iperf3 UDP test from the kernel to the eth
> device. Waited 4 minutes to see what happens: memory doesn't go back to
> the mempool; little by little, memory usage increases.
>
> MBUF_POOL                      512            9,887                     4.92% [|....................]
> MBUF_POOL                      1,411          8,988                    13.57% [|||..................]
> MBUF_POOL                      1,390          9,009                    13.37% [|||..................]
> MBUF_POOL                      1,558          8,841                    14.98% [|||..................]
> MBUF_POOL                      1,453          8,946                    13.97% [|||..................]
> MBUF_POOL                      1,525          8,874                    14.66% [|||..................]
> MBUF_POOL                      1,592          8,807                    15.31% [||||.................]
> MBUF_POOL                      1,639          8,760                    15.76% [||||.................]
> MBUF_POOL                      1,624          8,775                    15.62% [||||.................]
> MBUF_POOL                      1,618          8,781                    15.56% [||||.................]
> MBUF_POOL                      1,708          8,691                    16.42% [||||.................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL                      1,709          8,690                    16.43% [||||.................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL                      1,709          8,690                    16.43% [||||.................]
> MBUF_POOL                      1,683          8,716                    16.18% [||||.................]
> MBUF_POOL                      1,563          8,836                    15.03% [||||.................]
> MBUF_POOL                      1,726          8,673                    16.60% [||||.................]
> MBUF_POOL                      1,589          8,810                    15.28% [||||.................]
> MBUF_POOL                      1,556          8,843                    14.96% [|||..................]
> MBUF_POOL                      1,610          8,789                    15.48% [||||.................]
> MBUF_POOL                      1,616          8,783                    15.54% [||||.................]
> MBUF_POOL                      1,709          8,690                    16.43% [||||.................]
> MBUF_POOL                      1,740          8,659                    16.73% [||||.................]
> MBUF_POOL                      1,546          8,853                    14.87% [|||..................]
> MBUF_POOL                      1,710          8,689                    16.44% [||||.................]
> MBUF_POOL                      1,787          8,612                    17.18% [||||.................]
> MBUF_POOL                      1,579          8,820                    15.18% [||||.................]
> MBUF_POOL                      1,780          8,619                    17.12% [||||.................]
> MBUF_POOL                      1,679          8,720                    16.15% [||||.................]
> MBUF_POOL                      1,604          8,795                    15.42% [||||.................]
> MBUF_POOL                      1,761          8,638                    16.93% [||||.................]
> MBUF_POOL                      1,773          8,626                    17.05% [||||.................]
>
> Test-3 -) (after patching) Ran the same iperf3 UDP test from the kernel
> to the eth device. It looks stable after patching:
>
> MBUF_POOL                      76             10,323                    0.73% [|....................]
> MBUF_POOL                      193            10,206                    1.86% [|....................]
> MBUF_POOL                      96             10,303                    0.92% [|....................]
> MBUF_POOL                      269            10,130                    2.59% [|....................]
> MBUF_POOL                      102            10,297                    0.98% [|....................]
> MBUF_POOL                      235            10,164                    2.26% [|....................]
> MBUF_POOL                      87             10,312                    0.84% [|....................]
> MBUF_POOL                      293            10,106                    2.82% [|....................]
> MBUF_POOL                      99             10,300                    0.95% [|....................]
> MBUF_POOL                      296            10,103                    2.85% [|....................]
> MBUF_POOL                      90             10,309                    0.87% [|....................]
> MBUF_POOL                      299            10,100                    2.88% [|....................]
> MBUF_POOL                      86             10,313                    0.83% [|....................]
> MBUF_POOL                      262            10,137                    2.52% [|....................]
> MBUF_POOL                      81             10,318                    0.78% [|....................]
> MBUF_POOL                      81             10,318                    0.78% [|....................]
> MBUF_POOL                      87             10,312                    0.84% [|....................]
> MBUF_POOL                      252            10,147                    2.42% [|....................]
> MBUF_POOL                      97             10,302                    0.93% [|....................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL                      296            10,103                    2.85% [|....................]
> MBUF_POOL                      95             10,304                    0.91% [|....................]
> MBUF_POOL                      269            10,130                    2.59% [|....................]
> MBUF_POOL                      302            10,097                    2.90% [|....................]
> MBUF_POOL                      88             10,311                    0.85% [|....................]
> MBUF_POOL                      305            10,094                    2.93% [|....................]
> MBUF_POOL                      88             10,311                    0.85% [|....................]
> MBUF_POOL                      290            10,109                    2.79% [|....................]
> MBUF_POOL                      84             10,315                    0.81% [|....................]
> MBUF_POOL                      85             10,314                    0.82% [|....................]
> MBUF_POOL                      291            10,108                    2.80% [|....................]
> MBUF_POOL                      303            10,096                    2.91% [|....................]
> MBUF_POOL                      92             10,307                    0.88% [|....................]
>
>
> Best regards.
>
>
> Ferruh Yigit <ferruh.yigit@amd.com> wrote on Thu, 18 May 2023 at 17:56:
>
>     On 5/18/2023 9:14 AM, Yasin CANER wrote:
>     > Hello Ferruh,
>     >
>     > Thanks for your kind response. Also thanks to Stephen.
>     >
>     > Even if only 1 packet is consumed by the kernel, rx_kni allocates
>     > another 32 units each time. After a while, the whole mempool is
>     > used up in alloc_q from kni; there is no room left for it.
>     >
>
>     What you described continues until 'alloc_q' is full; the default
>     fifo length is 1024 (KNI_FIFO_COUNT_MAX). Do you allocate fewer
>     mbufs than that in your mempool?
>
>     You can consider either increasing the mempool size or decreasing
>     the 'alloc_q' fifo length, but reducing the fifo size may cause
>     performance issues, so you need to evaluate that option.
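>
>     A sketch of the first option (the count is illustrative; it budgets
>     for both KNI fifos being full on top of your other usage):
>
>     struct rte_mempool *mp = rte_pktmbuf_pool_create("MBUF_POOL",
>                     16 * 1024 - 1, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>                     rte_socket_id());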
>
>     > Do you think my mistake is using one common mempool for both kni
>     > and eth?
>     >
>
>     Using the same mempool for both is fine.
>
>     > If it needs a separate mempool, I'd like that noted in the docs.
>     >
>     > Best regards.
>     >
>     > Ferruh Yigit <ferruh.yigit@amd.com> wrote on Wed, 17 May 2023 at
>     > 20:53:
>     >
>     >     On 5/9/2023 12:13 PM, Yasin CANER wrote:
>     >     > Hello,
>     >     >
>     >     > I drew a flow via asciiflow to explain myself better. The
>     >     > problem is that after transmitting packets (mbufs), they are
>     >     > never put in the kni->free_q to go back to the original
>     >     > pool. Each cycle, it allocates another 32 units, which
>     >     > causes leaks. Or I am missing something.
>     >     >
>     >     > I already tried the rte_eth_tx_done_cleanup() function, but
>     >     > it didn't fix anything.
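>     >     > (For reference, I called it as below; a free_cnt of 0 means
>     >     > "free all done mbufs", and it returns the number freed or a
>     >     > negative errno:)
>     >     >
>     >     > int n = rte_eth_tx_done_cleanup(port_id, queue_id, 0);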
>     >     >
>     >     > I am working on a patch to fix this issue, but I am not sure
>     >     > if there is another way.
>     >     >
>     >     > Best regards.
>     >     >
>     >     > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
>     >     >
>     >     >
>     >     > unsigned
>     >     > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
>     >     >                  unsigned int num)
>     >     > {
>     >     >         /* dequeue packets the kernel queued on tx_q */
>     >     >         unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
>     >     >
>     >     >         /* If buffers were removed, allocate mbufs and then
>     >     >          * put them into alloc_q */
>     >     >         /* Question: how do I test whether buffers were removed? */
>     >     >         if (ret)
>     >     >                 kni_allocate_mbufs(kni);
>     >     >
>     >     >         return ret;
>     >     > }
>     >     >
>     >
>     >     Selam Yasin,
>     >
>     >
>     >     You can expect the 'kni->alloc_q' fifo to be full; this is not
>     >     a memory leak.
>     >
>     >     As you pointed out, the number of mbufs consumed by the kernel
>     >     from 'alloc_q' and the number of mbufs added to 'alloc_q' are
>     >     not equal, and this is expected.
>     >
>     >     The target here is to prevent buffer underflow from the
>     >     kernel's perspective, so it always has available mbufs for new
>     >     packets.
>     >     That is why new mbufs are added to 'alloc_q' at worst the
>     >     same, and sometimes a higher, rate than they are consumed.
>     >
>     >     You should calculate your mbuf requirement with the assumption
>     >     that 'kni->alloc_q' will be full of mbufs.
>     >
>     >
>     >     'kni->alloc_q' is freed when kni is removed.
>     >     Since 'alloc_q' holds the physical addresses of the mbufs, it
>     >     is a little challenging to free them in userspace; that is why
>     >     the kernel first tries to move the mbufs to the 'kni->free_q'
>     >     fifo, please check 'kni_net_release_fifo_phy()' for it.
>     >
>     >     If all are moved to the 'free_q' fifo, nothing is left in
>     >     'alloc_q'; if not, userspace frees 'alloc_q' in
>     >     'rte_kni_release()', with the following call:
>     >     `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
>     >
>     >
>     >     I can see you have submitted fixes for this issue; although,
>     >     as I explained above, I don't think a defect exists, I will
>     >     review them today/tomorrow.
>     >
>     >     Regards,
>     >     Ferruh
>     >
>     >
>     >     > Stephen Hemminger <stephen@networkplumber.org> wrote on Mon,
>     >     > 8 May 2023 at 19:18:
>     >     >
>     >     >     On Mon, 8 May 2023 09:01:41 +0300
>     >     >     Yasin CANER <yasinncaner@gmail.com> wrote:
>     >     >
>     >     >     > Hello Stephen,
>     >     >     >
>     >     >     > Thank you for the response, it helps me a lot. I
>     >     >     > understand the problem better.
>     >     >     >
>     >     >     > After reading the mbuf library docs
>     >     >     > (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html),
>     >     >     > I realized that the 31-unit allocated memory slot
>     >     >     > doesn't return to the pool!
>     >     >
>     >     >     If the receive burst returns 1 mbuf, the other 31
>     >     >     pointers in the array are not valid. They do not point
>     >     >     to mbufs.
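>     >     >
>     >     >     In other words (a minimal sketch), only the first 'ret'
>     >     >     entries may be touched or freed:
>     >     >
>     >     >     unsigned int i;
>     >     >     unsigned int ret = rte_kni_rx_burst(kni, mbufs, 32);
>     >     >
>     >     >     for (i = 0; i < ret; i++)
>     >     >             rte_pktmbuf_free(mbufs[i]); /* past 'ret': garbage */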
>     >     >
>     >     >     > 1 unit of mbuf can be freed via rte_pktmbuf_free so it
>     >     >     > can go back to the pool.
>     >     >     >
>     >     >     > The main problem is that the allocation doesn't return
>     >     >     > to the original pool; it acts as used. So, after
>     >     >     > following the rte_pktmbuf_free function
>     >     >     > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a1215458932900b7cd5192326fa4a6902),
>     >     >     > I realized that there are 2 functions that help mbufs
>     >     >     > go back to the pool.
>     >     >     >
>     >     >     > These are rte_mbuf_raw_free
>     >     >     > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a9f188d53834978aca01ea101576d7432)
>     >     >     > and rte_pktmbuf_free_seg
>     >     >     > (http://doc.dpdk.org/api/rte__mbuf_8h.html#a006ee80357a78fbb9ada2b0432f82f37).
>     >     >     > I will focus on them.
>     >     >     >
>     >     >     > If there is another suggestion, I will be very pleased.
>     >     >     >
>     >     >     > Best regards.
>     >     >     >
>     >     >     > Yasin CANER
>     >     >     > Ulak
>     >     >
>     >
>