IAVF/ice Intel E810 (i.e. SR-IOV case), 16 queues and RSS: dpdk-testpmd reports single-queue pps / stats
spyroot
spyroot at gmail.com
Fri Apr 18 20:30:43 CEST 2025
Hi Folks,
I am observing that DPDK test-pmd with the IAVF PMD (ice PF driver)
reports statistics incorrectly when the incoming UDP traffic
randomizes or increments IP/UDP header fields (addresses, ports, etc.).
I have tested all the 23.x stable releases and other branches.
- If I use a *single* flow (all tuples are identical on the TX side,
so the RSS hash on the RX side produces the same result), there is no
issue: on the RX side I see zero packet drops and the correct pps
value reported by test-pmd.
- If I increase the number of flows (randomized IP/UDP tuples, etc.),
the pps and byte/packet counters report only a single queue's worth
of traffic. It looks to me as if only some default queue 0 is counted
and the remaining 15 queues are skipped (in my case --rxq=16). I am
not sure whether IAVF does that or ice reports it that way.
For example, the counter I'm referring to is test-pmd's Rx-pps counter:
Rx-pps: 4158531 Rx-bps: 2129167832
I'm also observing the PMD failing to fetch stats:
iavf_query_stats(): fail to execute command OP_GET_STATS
iavf_dev_stats_get(): Get statistics failed
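For reference, DPDK's rte_eth_stats carries both port-level totals (ipackets, ibytes) and per-queue arrays (q_ipackets[], capped at RTE_ETHDEV_QUEUE_STAT_CNTRS, 16 by default), and the port total should equal the sum over the queues. A minimal sketch of that consistency check (plain Python with made-up numbers, not the DPDK API) showing what the symptom looks like:

```python
# Mock of one stats interval, mirroring the shape of DPDK's
# rte_eth_stats: a q_ipackets[] per-queue array plus a port total.
# All values are invented for illustration.
NB_RXQ = 16

# Suppose each of the 16 RX queues received 250k packets.
q_ipackets = [250_000] * NB_RXQ

# What the port-level counter should report: the sum over all queues.
expected_total = sum(q_ipackets)   # 4,000,000

# The symptom described above looks like a total covering one queue:
single_queue_total = q_ipackets[0]  # 250,000

print(expected_total)
print(single_queue_total)
```

If the reported Rx-pps tracks something like single_queue_total rather than expected_total while the OP_GET_STATS queries are failing, that would be consistent with stale or partial stats reads.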
My question: if I run two instances of test-pmd,
test-pmd --allow X
test-pmd --allow Y
where X and Y are the PCI addresses of two VFs from the same PF, I
expect each instance to report the total stats (pps/bytes, etc.,
combined across all 16 queues of its port 0) per port.
Yes/no?
Has anyone had a similar issue in the past?
Thank you,
MB