Hi Stephen,

> NAK
> Doing this risks having a CPU lockup if userspace does not keep up
> or the DPDK application gets stuck.
>
> There are better ways to solve the TCP stack queue overrun issue:
> 1. Use a better queueing discipline on the kni device. The Linux default
> of pfifo_fast has bufferbloat issues. Use fq_codel, fq, codel or pie?
> 2. KNI should implement BQL so that the TCP stack can see backpressure
> from the queue depth.

Thanks for the suggestions.

I agree that we risk a CPU lockup if the DPDK app gets stuck.

Indeed, I am running on an older Linux kernel, and the default queueing
discipline is pfifo_fast. I'll experiment with the queueing disciplines
you recommended.
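
As a first attempt, I'd switch the KNI interface to fq_codel with
something like this (vEth0 is just a placeholder for the actual KNI
interface name):

    tc qdisc replace dev vEth0 root fq_codel
    tc -s qdisc show dev vEth0

The second command is only to confirm the qdisc is installed and to
watch the drop/ECN-mark counters while reproducing the overrun.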
> As a simple workaround, increase the KNI ring size. It won't solve the
> whole problem but it can help.

I had moderate success with increasing MAX_MBUF_BURST_NUM from 32 to
1024 in librte_kni. I'm not sure if such a change would be upstreamable;
perhaps it needs a bit more testing.

I'll drop the current patch.
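
For reference, the change I tested was essentially the one-liner below;
I'm assuming the define still lives in lib/librte_kni/rte_kni.c, as it
does in the tree I'm on:

    /* lib/librte_kni/rte_kni.c */
    -#define MAX_MBUF_BURST_NUM 32
    +#define MAX_MBUF_BURST_NUM 1024

If this were ever to go upstream, making the burst size configurable
would probably be cleaner than hard-coding a larger value.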