<div dir="ltr"><div>Hello,</div><div><br></div><div>I noticed a small inconsistency in the virtio pmd's xstats.</div><div>The stat "rx_q0_errors" appears twice.</div><div>I
also think the stats "rx_q0_packets", "rx_q0_bytes", "tx_q0_packets"
and "tx_q0_bytes" are duplicates of "rx_q0_good_packets",
"rx_q0_good_bytes", "tx_q0_good_packets" and "tx_q0_good_bytes"</div><div><br></div><div>I believe this issue probably appeared after this commit:</div><div><br></div><div>f30e69b41f94: ethdev: add device flag to bypass auto-filled queue xstats</div><div><a href="http://scm.6wind.com/vendor/dpdk.org/dpdk/commit/?id=f30e69b41f949cd4a9afb6ff39de196e661708e2" target="_blank">http://scm.6wind.com/vendor/dpdk.org/dpdk/commit/?id=f30e69b41f949cd4a9afb6ff39de196e661708e2</a></div><div><br></div><div>From
From what I understand, the rx_q0_errors stat was originally reported by
the ethdev library (librte_ethdev), but it was changed so that it is
reported by each pmd instead. The RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS flag
was temporarily set to keep the old behaviour, so that every pmd had time
to adapt to the change.
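To illustrate, here is roughly what the library side does (a trimmed
sketch of eth_basic_stats_get_names() in lib/ethdev/rte_ethdev.c,
paraphrased from memory rather than copied verbatim, so the exact shape
may differ):

/* per-queue basic stats names auto-filled by ethdev */
static const struct rte_eth_xstats_name_off eth_dev_rxq_stats_strings[] = {
	{"packets", offsetof(struct rte_eth_stats, q_ipackets)},
	{"bytes",   offsetof(struct rte_eth_stats, q_ibytes)},
	{"errors",  offsetof(struct rte_eth_stats, q_errors)},
};

if ((dev->data->dev_flags & RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS) == 0)
	return cnt_used_entries; /* pmd reports its own per-queue xstats */

for (id_queue = 0; id_queue < num_q; id_queue++)
	for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++)
		/* builds "rx_q0_packets", "rx_q0_bytes", "rx_q0_errors", ... */
		snprintf(xstats_names[cnt_used_entries++].name,
			 sizeof(xstats_names[0].name), "rx_q%u_%s",
			 id_queue, eth_dev_rxq_stats_strings[idx].name);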
But it seems the flag was never removed from the virtio pmd and, as a
result, some stats are fetched twice when displaying xstats.

First in librte_ethdev:
https://git.dpdk.org/dpdk/tree/lib/ethdev/rte_ethdev.c#n3266
(you can see the check on the RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS flag
before the snprintf on eth_dev_rxq_stats_strings[])

And a second time in the virtio pmd:
https://git.dpdk.org/dpdk/tree/drivers/net/virtio/virtio_ethdev.c#n705
(see the snprintf on rte_virtio_rxq_stat_strings[])
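For comparison, here is roughly what the virtio pmd registers on its side
(an abridged sketch of virtio_dev_xstats_get_names() and its name table,
again paraphrased rather than quoted):

static const struct rte_virtio_xstats_name_off rte_virtio_rxq_stat_strings[] = {
	{"good_packets", offsetof(struct virtnet_rx, stats.packets)},
	{"good_bytes",   offsetof(struct virtnet_rx, stats.bytes)},
	{"errors",       offsetof(struct virtnet_rx, stats.errors)},
	/* ...multicast/broadcast/size-bin counters... */
};

for (t = 0; t < VIRTIO_NB_RXQ_XSTATS; t++)
	/* builds "rx_q0_good_packets", ..., and a second "rx_q0_errors" */
	snprintf(xstats_names[count++].name, sizeof(xstats_names[0].name),
		 "rx_q%u_%s", i, rte_virtio_rxq_stat_strings[t].name);

Note that the pmd's "errors" entry also expands to "rx_q0_errors", which
would explain why that stat collides by name, while "good_packets" and
"good_bytes" only duplicate the auto-filled "packets" and "bytes" counters
under a different name.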
This problem can be reproduced in testpmd simply by displaying the xstats
of a port using the net_virtio driver:

Reproduction:
===========

 1) start dpdk-testpmd:

modprobe -a uio_pci_generic
dpdk-devbind -b uio_pci_generic 03:00.0
dpdk-devbind -b uio_pci_generic 04:00.0

dpdk-devbind -s

Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic unused=vfio-pci
0000:04:00.0 'Virtio 1.0 network device 1041' drv=uio_pci_generic unused=vfio-pci
[...]

dpdk-testpmd -a 0000:03:00.0 -a 0000:04:00.0 -- -i --rxq=1 --txq=1 --coremask=0x4 --total-num-mbufs=250000
EAL: Detected CPU lcores: 3
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:03:00.0 (socket -1)
EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:04:00.0 (socket -1)
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=250000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 52:54:00:B0:8F:88
Configuring Port 1 (socket 0)
Port 1: 52:54:00:EF:09:1F
Checking link statuses...
Done

 2) port info:

show port info 0

********************* Infos for port 0 *********************
MAC address: 52:54:00:B0:8F:88
Device name: 0000:03:00.0
Driver name: net_virtio
Firmware-version: not available
Devargs:
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: Unknown
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
 strip off, filter off, extend off, qinq strip off
No RSS offload flow type is supported.
Minimum size of RX buffer: 64
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 32768
Min possible number of RXDs per queue: 32
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 32768
Min possible number of TXDs per queue: 32
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
guest_features: 0x110af8020
vtnet_hdr_size: 12
use_vec: rx-0 tx-0
use_inorder: rx-0 tx-0
intr_lsc: 1
max_mtu: 9698
max_rx_pkt_len: 1530
max_queue_pairs: 1
req_guest_features: 0x8000005f10ef8028

 3) show port xstats:

show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 0
tx_good_packets: 0
rx_good_bytes: 0
tx_good_bytes: 0
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 0
rx_q0_bytes: 0
rx_q0_errors: 0 <==================
tx_q0_packets: 0
tx_q0_bytes: 0
rx_q0_good_packets: 0
rx_q0_good_bytes: 0
rx_q0_errors: 0 <==================
rx_q0_multicast_packets: 0
rx_q0_broadcast_packets: 0
rx_q0_undersize_packets: 0
rx_q0_size_64_packets: 0
rx_q0_size_65_127_packets: 0
rx_q0_size_128_255_packets: 0
rx_q0_size_256_511_packets: 0
rx_q0_size_512_1023_packets: 0
rx_q0_size_1024_1518_packets: 0
rx_q0_size_1519_max_packets: 0
tx_q0_good_packets: 0
tx_q0_good_bytes: 0
tx_q0_multicast_packets: 0
tx_q0_broadcast_packets: 0
tx_q0_undersize_packets: 0
tx_q0_size_64_packets: 0
tx_q0_size_65_127_packets: 0
tx_q0_size_128_255_packets: 0
tx_q0_size_256_511_packets: 0
tx_q0_size_512_1023_packets: 0
tx_q0_size_1024_1518_packets: 0
tx_q0_size_1519_max_packets: 0

You can see that the stat "rx_q0_errors" appears twice.
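If my reading is correct, the fix could be as simple as not setting the
flag in the virtio pmd, since it already reports its own per-queue xstats.
I am assuming here that the flag is set during init in
drivers/net/virtio/virtio_ethdev.c (in eth_virtio_dev_init(), as commit
f30e69b41f94 did for the other pmds); if so, something like:

-	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;

would make the xstats names unique again.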