[dpdk-dev] net/mlx5: mellanox cx5/cx6-dx increment "rx_phy_discard_packets" with DPDK rte_flow actions COUNT & DROP

Slava Ovsiienko viacheslavo at nvidia.com
Thu May 6 12:55:17 CEST 2021


Hi, Kang

There are some questions to clarify:

  *   What is the packet rate (in packets per second)?
  *   What is the packet size?
  *   Do you use the switchdev configuration (E-Switch)?
  *   Could you try creating all flows in group 1 (with the first flow in group 0 forwarding all the traffic to group 1)?
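As a sketch of that last suggestion, the group-0 flow would use the rte_flow jump action to steer everything to group 1, where the match/drop rules live. The rules below are illustrative only, reusing the UDP matches from the report further down; the jump action and group attributes are standard testpmd flow syntax:

```
testpmd> flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
testpmd> flow create 0 ingress group 1 pattern eth / ipv4 / udp dst is 53 / end actions count / drop / end
testpmd> flow create 0 ingress group 1 pattern eth / ipv4 / udp src is 53 / end actions count / drop / end
```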

With best regards,
Slava

From: Arthas <kangzy1982 at qq.com>
Sent: Thursday, May 6, 2021 11:00
To: Gregory Etelson <getelson at nvidia.com>; dev at dpdk.org
Cc: Matan Azrad <matan at nvidia.com>; Ori Kam <orika at nvidia.com>; Raslan Darawsheh <rasland at nvidia.com>; stable at dpdk.org; Slava Ovsiienko <viacheslavo at nvidia.com>; Shahaf Shuler <shahafs at nvidia.com>
Subject: [dpdk-dev] net/mlx5: mellanox cx5/cx6-dx increment "rx_phy_discard_packets" with DPDK rte_flow actions COUNT & DROP

Hardware: CX5/CX6-DX + Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz
DPDK version: 19.11.8 / 20.11 / 21.05-rc1&rc2

testpmd with case:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end actions count / drop / end
testpmd> flow list 0
ID         Group   Prio        Attr        Rule
0           0              0              i--            ETH IPV4 UDP => COUNT DROP
1           0              0              i--            ETH IPV4 UDP => COUNT DROP
2           0              0              i--            ETH IPV4 TCP => COUNT DROP
or
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end actions count / rss / end
testpmd> flow list 0
ID         Group   Prio        Attr        Rule
0           0              0              i--            ETH IPV4 UDP => COUNT RSS
1           0              0              i--            ETH IPV4 UDP => COUNT RSS
2           0              0              i--            ETH IPV4 UDP => COUNT RSS
testpmd>

As soon as more than one flow is created, the CX5/CX6-DX NIC starts incrementing 'rx_phy_discard_packets'.
With only one flow there is no problem!
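For reference, the counter in question can be watched from inside testpmd via the extended statistics (a standard testpmd command, not taken from the original report):

```
testpmd> show port xstats 0
```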

Is this a CX5/CX6-DX hardware issue,
or is it a bug in the DPDK mlx5 PMD?

Best Regards!
KANG

