[dpdk-users] occasionally traffic stalls due to rx and tx descriptor not available

Hui Liu huiliu0213 at gmail.com
Fri Jul 6 05:36:51 CEST 2018

Hi Amar,

I'm a DPDK newbie and I saw a similar problem recently on one 82599 port.
My app is doing a job like this:
1. The TX thread calls rte_pktmbuf_alloc() to allocate buffers from mbuf_pool,
fills them as ICMP packets, and sends them out at around 400,000
packets/sec (1.6 Gbps);
2. The RX thread receives the ICMP responses, and worker threads process them.

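For reference, the TX side described above can be sketched roughly as below. This is a hedged sketch, not my actual code: the function name, PORT_ID/QUEUE_ID placeholders, and the ICMP fill step are illustrative, and it assumes a DPDK app where the EAL, port, and tx_buffer have already been set up.

```c
/* Sketch of the TX path (hypothetical names; assumes DPDK EAL and the
 * port/queue/tx_buffer are already initialized elsewhere). */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void tx_one_icmp(struct rte_mempool *mbuf_pool,
                        struct rte_eth_dev_tx_buffer *tx_buffer,
                        uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(mbuf_pool);
    if (m == NULL)
        return;              /* mempool exhausted: another possible stall cause */

    /* ... fill m with Ethernet/IP/ICMP headers and payload here ... */

    /* Buffered send: packets accumulate in tx_buffer and go out in bursts. */
    rte_eth_tx_buffer(port_id, queue_id, tx_buffer, m);
    rte_eth_tx_buffer_flush(port_id, queue_id, tx_buffer);
}
```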
The app runs fine for some time, typically between 8 hours and 5 days,
then goes into a bad state: the TX thread can no longer send packets via
rte_eth_tx_buffer() or rte_eth_tx_buffer_flush(), and
rte_eth_tx_buffer_count_callback() is invoked for every packet being flushed.
I strongly suspect descriptor exhaustion, but I haven't confirmed it yet.

In my app I set the max packet burst to 256, the RX descriptor count to
2048, and the TX descriptor count to 4096, with a single RX/TX queue per
port, to get good performance; I'm not sure that is the best combination.
Just FYI. On the descriptor problem, I'm still investigating what kind of
behavior/condition takes descriptors and never releases them, just as in
your Query 2. If applicable, would you please let me know whether there is
a way to get the number of available TX/RX descriptors on a port, so I can
see when descriptors are taken and never released over time?
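On that last question, DPDK does expose descriptor-status queries. The sketch below is a suggestion, not tested against my setup: it assumes DPDK 17.05 or later (rte_eth_tx_descriptor_status() was added then), and note that not every PMD implements these calls (they can return -ENOTSUP).

```c
/* Sketch: poll descriptor usage on one port/queue (assumes DPDK >= 17.05,
 * port already started; some PMDs return -ENOTSUP for these calls). */
#include <stdio.h>
#include <rte_ethdev.h>

static void log_desc_usage(uint16_t port, uint16_t queue, uint16_t nb_txd)
{
    /* Number of RX descriptors currently in use on the queue. */
    int rx_used = rte_eth_rx_queue_count(port, queue);
    if (rx_used >= 0)
        printf("rx queue %u: %d descriptors in use\n", queue, rx_used);

    /* Walk TX ring offsets and count descriptors the NIC has not yet
     * completed (RTE_ETH_TX_DESC_FULL). */
    int full = 0;
    for (uint16_t off = 0; off < nb_txd; off++)
        if (rte_eth_tx_descriptor_status(port, queue, off) ==
            RTE_ETH_TX_DESC_FULL)
            full++;
    printf("tx queue %u: %d descriptors still held\n", queue, full);
}
```

There is also rte_eth_rx_descriptor_status() for the RX side; calling something like this periodically should show whether the TX ring's "full" count ratchets up and never comes back down.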

Due to limits in my system environment, I'm not able to attach gdb
directly to debug... While I'm investigating this problem, would you
please update me when you have any clue on your issue? I might get some
inspiration from it.
Thank you very much!


On Thu, Jul 5, 2018 at 4:34 AM, Amarnath Nallapothula <
Amarnath.Nallapothula at riverbed.com> wrote:

> Hi Experts,
> I am testing the performance of my DPDK-based application, which forwards
> packets from port 1 to port 2 of a 40G NIC card and vice versa. Occasionally
> we see that packet RX and TX stop on one of the ports. I looked through the
> DPDK fm10k driver code and found that this can happen if RX/TX
> descriptors are not available.
> To improve performance, I am using RSS and created five RX
> and TX queues. Dedicated lcores are assigned to forward packets from port 1
> queue 0 to port 2 queue 0, and vice versa.
> During port initialization, each rx_queue is initialized with 128 RX ring
> descriptors and each tx_queue with 512 TX ring descriptors.
> Threshold values are left at their defaults.
> I have a few queries here:
>   1.  Are the above RX and TX descriptor counts good for each queue on a
> given port?
>   2.  Under what conditions do RX and TX descriptors get exhausted?
>   3.  Any suggestion or information you can provide to debug this issue?
> Regards,
> Amar
