[dpdk-users] mlx5: packets lost between good+discard and phy counters

Gerry Wan gerryw at stanford.edu
Sun Apr 11 03:31:10 CEST 2021


After further investigation, I think this may be a bug introduced in DPDK
v20.11: these "lost" packets should be counted as "rx_out_of_buffer" and
"rx_missed_errors", but are not. On v20.08 both counters increment as
expected, but on v20.11 and v21.02 they always remain 0.
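
For anyone wanting to reproduce the check: I read the counter through the
generic xstats API, roughly as in this minimal sketch (error handling
trimmed; the helper name is mine):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Return the value of the named xstat on port_id, or 0 if absent. */
    static uint64_t
    xstat_value(uint16_t port_id, const char *name)
    {
        int n = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (n <= 0)
            return 0;

        struct rte_eth_xstat_name names[n];
        struct rte_eth_xstat xstats[n];

        rte_eth_xstats_get_names(port_id, names, n);
        rte_eth_xstats_get(port_id, xstats, n);

        for (int i = 0; i < n; i++)
            if (strcmp(names[i].name, name) == 0)
                return xstats[i].value;
        return 0;
    }

On v20.08, xstat_value(port, "rx_out_of_buffer") climbs under overload; on
v20.11 and v21.02 it stays 0 even when packets are clearly being dropped.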

Any workarounds for this? This is an important statistic for my use case.
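
The only workaround I can think of is to bypass xstats entirely: since mlx5
is a bifurcated driver and the kernel netdev stays visible, the device's
out_of_buffer counter can be read from sysfs (or ethtool -S <ifname>)
instead. The path below is an assumption based on my setup (device mlx5_0,
port 1) and may differ on other systems:

    #include <stdio.h>
    #include <inttypes.h>

    /* Assumed sysfs location of the mlx5 out-of-buffer counter; adjust
     * the device name and port number for your system. */
    #define OOB_PATH \
        "/sys/class/infiniband/mlx5_0/ports/1/hw_counters/out_of_buffer"

    static uint64_t
    read_out_of_buffer(void)
    {
        uint64_t val = 0;
        FILE *f = fopen(OOB_PATH, "r");

        if (f != NULL) {
            if (fscanf(f, "%" SCNu64, &val) != 1)
                val = 0;
            fclose(f);
        }
        return val;
    }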

On Fri, Apr 2, 2021 at 5:03 PM Gerry Wan <gerryw at stanford.edu> wrote:

> I have a simple forwarding experiment using an mlx5 NIC directly connected
> to a packet generator. I am noticing that at a high enough throughput,
> rx_good_packets + rx_phy_discard_packets may not equal rx_phy_packets.
> Where are the remaining packets being dropped?
>
> Below is an example xstats dump taken while receiving at almost the limit
> of what my application can handle with no loss. rx_phy_discard_packets is
> 0, yet the number of packets actually delivered to the CPU
> (rx_good_packets = 319992439) is less than rx_phy_packets (319999892),
> leaving 7453 packets unaccounted for. rx_out_of_buffer and all other
> error counters are also 0.
>
> I have disabled Ethernet flow control via rte_eth_dev_flow_ctrl_set with
> mode = RTE_FC_NONE (sketched below), if that matters.
>
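> For reference, a minimal sketch of how I make that call (port 0 assumed,
> return values unchecked; needs <rte_ethdev.h> and <string.h>):
>
>     struct rte_eth_fc_conf fc_conf;
>
>     memset(&fc_conf, 0, sizeof(fc_conf));
>     rte_eth_dev_flow_ctrl_get(0, &fc_conf); /* start from current config */
>     fc_conf.mode = RTE_FC_NONE;             /* no pause frames either way */
>     rte_eth_dev_flow_ctrl_set(0, &fc_conf);
>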
> {
>     "rx_good_packets": 319992439,
>     "tx_good_packets": 0,
>     "rx_good_bytes": 19199546340,
>     "tx_good_bytes": 0,
>     "rx_missed_errors": 0,
>     "rx_errors": 0,
>     "tx_errors": 0,
>     "rx_mbuf_allocation_errors": 0,
>     "rx_q0_packets": 319992439,
>     "rx_q0_bytes": 19199546340,
>     "rx_q0_errors": 0,
>     "rx_wqe_errors": 0,
>     "rx_unicast_packets": 319999892,
>     "rx_unicast_bytes": 19199993520,
>     "tx_unicast_packets": 0,
>     "tx_unicast_bytes": 0,
>     "rx_multicast_packets": 0,
>     "rx_multicast_bytes": 0,
>     "tx_multicast_packets": 0,
>     "tx_multicast_bytes": 0,
>     "rx_broadcast_packets": 0,
>     "rx_broadcast_bytes": 0,
>     "tx_broadcast_packets": 0,
>     "tx_broadcast_bytes": 0,
>     "tx_phy_packets": 0,
>     "rx_phy_packets": 319999892,
>     "rx_phy_crc_errors": 0,
>     "tx_phy_bytes": 0,
>     "rx_phy_bytes": 20479993088,
>     "rx_phy_in_range_len_errors": 0,
>     "rx_phy_symbol_errors": 0,
>     "rx_phy_discard_packets": 0,
>     "tx_phy_discard_packets": 0,
>     "tx_phy_errors": 0,
>     "rx_out_of_buffer": 0,
>     "tx_pp_missed_interrupt_errors": 0,
>     "tx_pp_rearm_queue_errors": 0,
>     "tx_pp_clock_queue_errors": 0,
>     "tx_pp_timestamp_past_errors": 0,
>     "tx_pp_timestamp_future_errors": 0,
>     "tx_pp_jitter": 0,
>     "tx_pp_wander": 0,
>     "tx_pp_sync_lost": 0,
> }
>
>

