[dpdk-dev] [Bug 444] DPDK fails to receive packets in Azure when using more than 3 transmit queues
bugzilla at dpdk.org
Mon Apr 6 22:01:36 CEST 2020
https://bugs.dpdk.org/show_bug.cgi?id=444
Bug ID: 444
Summary: DPDK fails to receive packets in Azure when using more
than 3 transmit queues
Product: DPDK
Version: 19.11
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: core
Assignee: dev at dpdk.org
Reporter: christopher.swindle at metaswitch.com
Target Milestone: ---
I have set up two accelerated interfaces in Azure for use with DPDK and am
using the failsafe driver to bind to them. I find that if I configure more
than 3 transmit queues, one of the interfaces fails to receive packets (even
though no errors are shown).
Below is an example of the config that I use for a working case:
/usr/bin/testpmd -n 2 -w 0002:00:02.0 -w 0003:00:02.0 \
  --vdev="net_vdev_netvsc0,iface=eth1" --vdev="net_vdev_netvsc1,iface=eth2" \
  -- --forward-mode=rxonly --nb-cores 1 --stats-period 1 --txq 3
If I change just --txq from 3 to 4, the rx packet count stops increasing on
one of the interfaces. If I increase this to 10 transmit queues, both
interfaces fail to receive packets.
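For reference, the failing case differs only in the --txq value; a minimal
sketch of the reproduction command, assuming the same PCI addresses and vdev
arguments as the working case above:

/usr/bin/testpmd -n 2 -w 0002:00:02.0 -w 0003:00:02.0 \
  --vdev="net_vdev_netvsc0,iface=eth1" --vdev="net_vdev_netvsc1,iface=eth2" \
  -- --forward-mode=rxonly --nb-cores 1 --stats-period 1 --txq 4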
I am currently running the AKS image with the mlx4_ib module loaded (although
the same issue is also seen with ConnectX-5 devices in Azure). Since it seemed
possible this was a kernel issue, I also updated my kernel to 5.0.0-1032-azure,
but the same behaviour persists.
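For completeness, a quick sketch of how the environment can be confirmed
(standard Linux commands, not part of the original report):

lsmod | grep mlx    # confirm the mlx4_ib (or mlx5_ib) module is loaded
uname -r            # confirm the running kernel, e.g. 5.0.0-1032-azure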