<div dir="ltr">Hello,<br><br>I have run into an issue since I switched from the failsafe/vdev-netvsc pmd to the netvsc pmd.<br>I
have noticed that once I have first started my port, if I try to stop
and reconfigure it, the call to rte_eth_dev_configure() fails with a
couple of error logged from the netvsc pmd. It can be reproduced quite
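At the ethdev API level, the restart path in my application boils down to the following (a simplified sketch rather than my actual code; the helper name, descriptor counts, NULL queue configs and the port_conf/mb_pool arguments are placeholders):

#include <rte_ethdev.h>

/*
 * Simplified sketch of my application's restart path (placeholders,
 * not the real code). The initial configure/start at startup works;
 * it is the rte_eth_dev_configure() below, after a stop, that fails.
 */
static int
restart_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
	     const struct rte_eth_conf *port_conf,
	     struct rte_mempool *mb_pool)
{
	uint16_t q;
	int ret;

	ret = rte_eth_dev_stop(port_id);	/* succeeds */
	if (ret < 0)
		return ret;

	/* *port_conf was changed between the stop and this call */
	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, port_conf);
	if (ret < 0)
		return ret;	/* <- now fails here with -5 (-EIO) on netvsc */

	/* never reached with the netvsc pmd because of the error above */
	for (q = 0; q < nb_rxq; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL, mb_pool);
		if (ret < 0)
			return ret;
	}
	for (q = 0; q < nb_txq; q++) {
		ret = rte_eth_tx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL);
		if (ret < 0)
			return ret;
	}
	return rte_eth_dev_start(port_id);
}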
It can be reproduced quite easily with testpmd. I tried it on my Azure set-up:

root@dut-azure:~# ip -d l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
minmtu 0 maxmtu 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1
gso_max_size 65536 gso_max_segs 65535
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:22:48:39:09:b7 brd ff:ff:ff:ff:ff:ff promiscuity 0
minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-09b7-0022-4839-09b700224839
6: enP27622s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth1 state UP mode DEFAULT group default qlen 1000
link/ether 00:22:48:39:02:ca brd ff:ff:ff:ff:ff:ff promiscuity 0
minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8
gso_max_size 65536 gso_max_segs 65535 parentbus pci parentdev
6be6:00:02.0
    altname enP27622p0s2
7: enP25210s4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth3 state UP mode DEFAULT group default qlen 1000
link/ether 00:22:48:39:0f:cd brd ff:ff:ff:ff:ff:ff promiscuity 0
minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8
gso_max_size 65536 gso_max_segs 65535 parentbus pci parentdev
627a:00:02.0
    altname enP25210p0s2
8: enP16113s3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth2 state UP mode DEFAULT group default qlen 1000
link/ether 00:22:48:39:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 0
minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8
gso_max_size 65536 gso_max_segs 65535 parentbus pci parentdev
3ef1:00:02.0
    altname enP16113p0s2
9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:22:48:39:02:ca brd ff:ff:ff:ff:ff:ff promiscuity 0
minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-02ca-0022-4839-02ca00224839
10: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:22:48:39:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 0
minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-0936-0022-4839-093600224839
11: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:22:48:39:0f:cd brd ff:ff:ff:ff:ff:ff promiscuity 0
minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-0fcd-0022-4839-0fcd00224839

As you can see, I have 3 netvsc interfaces, eth1, eth2 and eth3, with their 3 NIC-accelerated counterparts.
I rebind them to uio_hv_generic and start testpmd:

root@dut-azure:~# dpdk-testpmd -- -i --rxq=2 --txq=2 --coremask=0x0c --total-num-mbufs=25000
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
mlx5_net: No available register for sampler.
mlx5_net: No available register for sampler.
mlx5_net: No available register for sampler.
hn_vf_attach(): found matching VF port 2
hn_vf_attach(): found matching VF port 0
hn_vf_attach(): found matching VF port 1
Interactive-mode selected
previous number of forwarding cores 1 - changed to number of configured cores 2
testpmd: create a new mbuf pool <mb_pool_0>: n=25000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 3 (socket 0)
Port 3: 00:22:48:39:02:CA
Configuring Port 4 (socket 0)
Port 4: 00:22:48:39:09:36
Configuring Port 5 (socket 0)
Port 5: 00:22:48:39:0F:CD
Checking link statuses...
Done
testpmd>

The 3 ports are initialized and started correctly. For example, here is the port info for the first one:

testpmd> show port info 3

********************* Infos for port 3 *********************
MAC address: 00:22:48:39:02:CA
Device name: 00224839-02ca-0022-4839-02ca00224839
Driver name: net_netvsc
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 50 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
  ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 2
Max possible RX queues: 64
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 2
Max possible TX queues: 64
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 1
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
  none

 -> First, stop the port:

testpmd> port stop 3
Stopping ports...
Done
 -> Then, change something in the port config. This will trigger a call to rte_eth_dev_configure() on the next port start. Here I change the link speed/duplex:

testpmd> port config 3 speed 10000 duplex full
testpmd>

 -> Finally, try to start the port:

testpmd> port start 3
Configuring Port 3 (socket 0)
hn_nvs_alloc_subchans(): nvs subch alloc failed: 0x2
hn_dev_configure(): subchannel configuration failed
ETHDEV: Port3 dev_configure = -5
Fail to configure port 3 <------

As you can see, the port configuration fails.
The error happens in hn_nvs_alloc_subchans(). Maybe the previous resources were not properly deallocated on port stop?

When I looked around in the pmd's code, I noticed the function hn_reinit(), which carries the following comment:

/*
 * Connects EXISTING rx/tx queues to NEW vmbus channel(s), and
 * re-initializes NDIS and RNDIS, including re-sending initial
 * NDIS/RNDIS configuration. To be used after the underlying vmbus
 * has been un- and re-mapped, e.g. as must happen when the device
 * MTU is changed.
 */

This function shows that it should be possible to run the configure step again without failing: hn_reinit() calls hn_dev_configure() (the pmd's dev_configure callback) and is itself called when the MTU is changed.
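In other words, if my reading is right, the entry point that already exercises this un-/re-map plus reconfigure sequence today is the MTU change, i.e. roughly the following (illustration only, based on the comment above and the same rte_ethdev.h include as the sketch earlier; I have not traced this path in detail and new_mtu is a placeholder):

static int
change_mtu(uint16_t port_id, uint16_t new_mtu)
{
	/*
	 * Based on my reading of the comment above (not verified by
	 * tracing): the netvsc MTU handling un-/re-maps the vmbus and
	 * then goes through hn_reinit() -> hn_dev_configure(), which
	 * would be why this path can reconfigure the port without
	 * hitting the subchannel allocation error.
	 */
	return rte_eth_dev_set_mtu(port_id, new_mtu);
}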
I suspect the operations described in the comment above might also be needed when rte_eth_dev_configure() is called after a port stop.

Is this a known issue?
In my application, this bug causes problems whenever I need to stop, reconfigure and restart a port.
Thank you for considering this issue.

Regards,
Edwin Brossette.