[dpdk-dev] [Bug 400] start testpmd with vmxnet3 can't receive and forward packets

bugzilla at dpdk.org
Wed Feb 19 03:53:28 CET 2020


https://bugs.dpdk.org/show_bug.cgi?id=400

            Bug ID: 400
           Summary: start testpmd with vmxnet3 can't receive and forward
                    packets
           Product: DPDK
           Version: 19.05
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev at dpdk.org
          Reporter: hailinx.xu at intel.com
  Target Milestone: ---

1. Install VMware ESXi 6.7.0
2. Install CentOS 7.6 in a VM
3. Reboot the VM after adding a new vSwitch for vmxnet3
4. Assign the switch to the CentOS 7.6 guest and start it (after booting, the
assigned network adapter is visible)
[root at localhost dpdk]# ./usertools/dpdk-devbind.py -s
Network devices using kernel driver
===================================
0000:03:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens160 drv=vmxnet3
unused=igb_uio *Active*
0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens224 drv=vmxnet3
unused=igb_uio
5. Bind the NIC to igb_uio
[root at localhost dpdk]# ./usertools/dpdk-devbind.py -b igb_uio 0000:13:00.0
6. start testpmd
[root at localhost dpdk]# ./x86_64-native-linuxapp-gcc/app/testpmd -c 6 -n 4 -- -i
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
for that size
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.

Configuring Port 0 (socket 0)
vmxnet3_v4_rss_configure(): Set RSS fields (v4) failed: 1
vmxnet3_dev_start(): Failed to configure v4 RSS
Port 0: 00:0C:29:D7:2F:BF
Checking link statuses...
Done
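Note: the EAL warning above ("1024 hugepages of size 2097152 reserved, but no
mounted hugetlbfs found for that size") suggests hugepages were reserved but no
hugetlbfs was mounted. A typical fix on Linux (assuming 2 MB pages; the mount
point is conventional, not mandatory) would be:

```shell
# Mount a hugetlbfs instance so DPDK can back its mempools with hugepages.
# /dev/hugepages is the conventional mount point; any empty directory works.
mkdir -p /dev/hugepages
mount -t hugetlbfs nodev /dev/hugepages

# Optionally (re)reserve 1024 x 2 MB hugepages at runtime:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```

This warning is likely unrelated to the RSS failure below, but ruling it out
keeps the reproduction clean.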
7. Set IO forwarding mode and start forwarding
testpmd> set fwd io
Set io packet forwarding mode
testpmd> start
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP
allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
testpmd>
8. Send a packet from tester port 0 to DUT port 0 using scapy
>>>
>>> sendp([Ether(src="02:00:00:00:00:00",dst="00:0C:29:D7:2F:1A")/IP()/TCP()],iface="p4p1")
.
Sent 1 packets.
9. Verify that DUT port 0 received and forwarded the packet
testpmd> show port stats 0

######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-missed: 1 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0

Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
testpmd>
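Given the "Failed to configure v4 RSS" errors during port start, one untested
workaround sketch is to launch testpmd with RSS disabled, which may bypass the
failing vmxnet3_v4_rss_configure() path; whether this restores packet
reception on this ESXi/vmxnet3 combination is an assumption, not a verified
result:

```shell
# Untested workaround sketch: --disable-rss turns off RSS in testpmd,
# which may avoid the v4 RSS configuration transaction that fails above.
./x86_64-native-linuxapp-gcc/app/testpmd -c 6 -n 4 -- -i --disable-rss
```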

Bisection identified the first bad commit: 643fba77070571d69f3b6ea0b8d26bd50d5a3cff
commit 643fba77070571d69f3b6ea0b8d26bd50d5a3cff
Author: Eduard Serra <eserra at vmware.com>
Date:   Thu Apr 18 20:59:37 2019 +0000

    net/vmxnet3: add v4 boot and guest UDP RSS config

    This patch introduces:
    - VMxnet3 v4 negotiation and,
    - entirely guest-driven UDP RSS support.

    VMxnet3 v3 already has UDP RSS support, however it
    depends on hypervisor provisioning on the VM through
    ESX specific flags, which are not transparent or known
    to the guest later on.

    Vmxnet3 v4 introduces a new API transaction which allows
    configuring RSS entirely from the guest. This API must be
    invoked after device shared mem region is initialized.

    IPv4 ESP RSS (SPI based) is also available, but currently
    there are no ESP RSS definitions on rte_eth layer to
    handle that.

    Signed-off-by: Eduard Serra <eserra at vmware.com>
    Acked-by: Yong Wang <yongwang at vmware.com>
