[dpdk-users] [dpdk-dev] TX-dropped is high while sending custom packet via testpmd app

Bruce Richardson bruce.richardson at intel.com
Tue Jul 16 11:43:41 CEST 2019


On Tue, Jul 16, 2019 at 03:01:12PM +0530, Nilesh wrote:
> Hello,
>     we are trying to send packets from testpmd to another machine that is
> also running DPDK.
>     We are building custom packets before sending them on the wire.
>     After running the application, the TX-dropped count is quite high, as
> can be seen in the following logs:
> 
> 
> $ sudo ./testpmd -c f -w 01:00.1  --   --nb-cores=2
> --eth-peer=0,A4:BF:01:37:23:AB --rxq=1 --txq=1
> 
> 
> Port statistics ====================================
>  ######################## NIC statistics for port 0 ########################
>  RX-packets: 1445150    RX-missed: 0          RX-bytes:  86709064
>  RX-errors: 0
>  RX-nombuf:  0
>  TX-packets: 1602045    TX-errors: 0          TX-bytes:  134571780
> 
>  Throughput (since last show)
>  Rx-pps:            0
>  Tx-pps:            0
>  ############################################################################
> 
>  ---------------------- Forward statistics for port 0 ----------------------
>  RX-packets: 1445160        RX-dropped: 0             RX-total: 1445160
>  TX-packets: 1602045        TX-dropped: 694971472     TX-total: 696573517
>  ----------------------------------------------------------------------------
> 
>  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>  RX-packets: 1445160        RX-dropped: 0             RX-total: 1445160
>  TX-packets: 1602045        TX-dropped: 694971472     TX-total: 696573517
>  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> 
> What could be the reason for such a high packet drop? Which buffer/queue
> size tunings affect this?
> 

Those TX-dropped stats just look wrong to me. Given that you received
only 1.45 million packets, having dropped nearly 700 million doesn't make
sense. Even the Tx packet count, though more reasonable, is higher than
the received count.
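
For reference, testpmd's TX-dropped figure in the forward statistics is,
broadly speaking, derived from packets that rte_eth_tx_burst() does not
accept and that the application must then free itself. Below is a minimal
sketch of that accounting pattern; the tx_dropped counter and the
forward_burst() helper are illustrative names, not testpmd's actual
symbols, and real forwarding engines also retry before giving up:

#include <rte_branch_prediction.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative drop counter; testpmd keeps something similar per
 * forwarding stream and sums it into "TX-dropped". */
static uint64_t tx_dropped;

static void
forward_burst(uint16_t port, uint16_t queue,
	      struct rte_mbuf **pkts, uint16_t nb_rx)
{
	uint16_t nb_tx = rte_eth_tx_burst(port, queue, pkts, nb_rx);

	/* Anything the TX ring did not accept must be freed by the
	 * application and is counted as dropped. */
	if (unlikely(nb_tx < nb_rx)) {
		tx_dropped += nb_rx - nb_tx;
		do {
			rte_pktmbuf_free(pkts[nb_tx]);
		} while (++nb_tx < nb_rx);
	}
}

A counter in that path being corrupted or misread would produce exactly
the kind of implausible number shown in your stats.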

In terms of testpmd settings, rxq=1 and txq=1 are the defaults, so they
aren't needed, and setting the number of forwarding cores to 2 won't do
anything, since there is only a single receive queue and a queue can't be
shared among cores.
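
In other words, your invocation could be trimmed to the equivalent:

  sudo ./testpmd -c f -w 01:00.1 -- --eth-peer=0,A4:BF:01:37:23:AB

To genuinely use two forwarding cores you would also need matching queue
counts, e.g. --rxq=2 --txq=2 --nb-cores=2. And if real TX drops remain
once the stats look sane, the first knob to try is a larger TX ring via
--txd (2048 below is just an example value, not a recommendation):

  sudo ./testpmd -c f -w 01:00.1 -- --eth-peer=0,A4:BF:01:37:23:AB --txd=2048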

Regards,
/Bruce

