[dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst

MAC Lee mac_leehk at yahoo.com.hk
Sun Mar 25 13:30:12 CEST 2018

Hi Filip,
    which DPDK version are you using? You can take a look at the DPDK source code: the rx drop counter may not be implemented for your driver, in which case you will always read 0 for rx drops.
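For reference, one way to check what the port itself counted is to read the generic port statistics after the run. This is a minimal sketch (not from the thread; the helper name is illustrative), assuming a PMD that reports these counters — `imissed` counts packets the hardware dropped because the RX descriptor ring was full, and `rx_nombuf` counts RX failures due to mbuf allocation:

```c
/* Sketch: dump the port-level drop counters via rte_eth_stats_get().
 * Which fields are actually populated varies per PMD. */
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void print_port_drops(uint16_t port_id) /* illustrative helper name */
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) == 0) {
        printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
               " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.imissed,
               stats.ierrors, stats.rx_nombuf);
    }
}
```

If the generic counters read 0 but packets are still missing, rte_eth_xstats_get() exposes the PMD-specific extended counters, which on some NICs are the only place HW drops show up.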

On Sun, 25 Mar 2018, Filip Janiszewski <contact at filipjaniszewski.com> wrote:

 Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
 To: users at dpdk.org
 Date: Sunday, 25 March 2018, 6:33 PM
 Hi Everybody,

 I have a weird drop problem, and the best way to understand my question
 is to have a look at this simple snippet (cleaned of all the irrelevant
 stuff):
 while( 1 )
 {
     if( config->running == false ) {
         break;
     }
     num_of_pkt = rte_eth_rx_burst( config->port_id,
                                    config->queue_idx,
                                    buffers,
                                    MAX_BURST_DEQ_SIZE );
     if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
         rx_ring_full = true; //probably not the best name
     }
     if( likely( num_of_pkt > 0 ) )
     {
         pk_captured += num_of_pkt;
         num_of_enq_pkt = rte_ring_sp_enqueue_bulk( config->incoming_pkts_ring,
                                                    (void**)buffers,
                                                    num_of_pkt,
                                                    &rx_ring_free_space );
         //if num_of_enq_pkt == 0 free the mbufs..
     }
 }
 This loop is retrieving packets from the device and pushing them into a
 queue for further processing by another core.
 When I run a test with a Mellanox card sending 20M (20878300) packets at
 2.5M p/s, the loop seems to miss some packets and pk_captured is always
 around 19M or so.

 rx_ring_full is never true, which means that num_of_pkt is always <
 MAX_BURST_DEQ_SIZE, so according to the documentation I should not have
 drops at the HW level. Also, num_of_enq_pkt is never 0, which means that
 all the packets are enqueued.
 Now, if from that snippet I remove the rte_ring_sp_enqueue_bulk call
 (and make sure to release all the mbufs), then pk_captured is always
 exactly equal to the number of packets I've sent to the NIC.

 So it seems (though I can't quite accept this idea) that
 rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to
 rte_eth_rx_burst and the next some packets are dropped due to a full ring
 on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst) always
 smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there were always
 sufficient room for the packets?

 Is anybody able to help me understand what's happening here?

 Note, MAX_BURST_DEQ_SIZE is 512.
