[dpdk-users] Unable to see incoming packets with example KNI application
aber at semihalf.com
Wed May 11 15:44:50 CEST 2016
Those two lines look suspicious:
ifconfig vEth0_0 172.25.48.200
ifconfig vEth1_0 172.25.48.201
Those lines configure two interfaces in the same class B (/16) network, so the kernel ends up with two connected routes to the same subnet and may send replies out of the wrong interface.
Could that be the reason for the issues?
Also, do you see the correct ARP entries on both sides?
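The overlap is easy to confirm (a sketch, assuming the classful /16 mask that `ifconfig` applies to a 172.25.x.x address when no netmask is given):

```python
import ipaddress

# Both KNI addresses fall inside 172.25.0.0/16 when no netmask is supplied,
# so two interfaces end up claiming the same connected subnet.
a = ipaddress.ip_interface("172.25.48.200/16")
b = ipaddress.ip_interface("172.25.48.201/16")
print(a.network)               # 172.25.0.0/16
print(a.network == b.network)  # True: same connected route on both interfaces
```

Giving each interface an address in a distinct subnet (or adding explicit netmasks) removes the ambiguity.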
On Tue, May 10, 2016 at 7:58 PM, Pavey, Nicholas <npavey at akamai.com> wrote:
> Good afternoon,
> I’m a new user of the DPDK library - this is my first post to this list.
> I’ve been experimenting with the various example applications, including ‘skeleton’ and ‘kni’.
> I have been able to get ‘skeleton’ to work correctly, and observed good forwarding performance from it. I have been unable to get ‘kni’ to work however.
> The symptom appears to be that incoming packets are not received correctly and are not forwarded through KNI into the kernel network stack. Likewise, outbound ICMP ‘ping’ packets sent with the standard Linux ‘ping’ command over the KNI interface appear never to leave the machine (see observations, below).
> I’ve searched the mail archives and have seen several people reporting what sound like similar problems. I don’t believe I saw a resolution there, however.
> Here’s a fair amount of detail about my environment. Please let me know if I’m missing something - I’ll happily provide extra details.
> Does anyone have any suggestions about what might be going on here?
> Nick Pavey
> Networking environment
> * DPDK machine has dual 10Gbps NICs
> - Intel 82599EB 10Gbps NIC
> * DPDK machine is directly connected to an Ixia load generator
> - SFP+ Direct Attached Copper (DAC) cabling
> - Running BreakingPoint load generation software
> * The Linux kernel is unaware of the 10Gbps interfaces, so DPDK can bind to them correctly
> * The control interface is an Intel I350 1Gbps copper NIC
> - This has a statically assigned IP address
> - I control the machine through this NIC
> DPDK versions
> I have tried the ‘kni’ application with two versions of the DPDK:
> * DPDK-16.04, unpacked from tarball
> * DPDK cloned from git. Head commit is db340cf2ef71af231af67be8e42fd603e4bab0ac
> - "i40e: fix VLAN stripping from inner header” by JingJing Wu, 5/4/2016
> Machine details
> Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz
> Dual socket, 8 core, hyperthreaded. 32 total execution contexts.
> 64GB RAM
> NICS are dual Intel 82599EB 10Gbps
> OS/compiler details
> The machine is running a modified version of the kernel. IIRC, it’s derived from Ubuntu 12.04.
> The kernel version is reported as 3.14.43. Unfortunately I’m unable to detail all the modifications, but it’s probably fair to assume the kernel is roughly equivalent to a stock 3.14.43.
> The compiler is GCC version 4.6.3.
> KNI application startup
> I have 384 huge pages available (a size chosen fairly arbitrarily), which seems to be enough for the ‘kni’ application to start up.
> I have built the ‘rte_kni’ kernel module on the DPDK target machine. I load it with no arguments, and it loads without complaint.
> I’m invoking the ‘kni’ application as follows:
> ./build/kni -c 0xf0 -n 4 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"
> So, the ‘kni’ application is running in promiscuous mode.
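For reference, the EAL coremask and the per-port `--config` entries above map as follows (a sketch; the `(port, lcore_rx, lcore_tx, lcore_kthread)` layout follows the KNI sample application's documented format):

```python
# Decode the EAL coremask -c 0xf0 into the lcore IDs it enables.
coremask = 0xF0
lcores = [i for i in range(64) if (coremask >> i) & 1]
print(lcores)  # [4, 5, 6, 7]

# --config="(0,4,6,8),(1,5,7,9)" reads, per port, as
# (port, lcore_rx, lcore_tx, lcore_kthread):
config = {0: {"rx": 4, "tx": 6, "kthread": 8},
          1: {"rx": 5, "tx": 7, "kthread": 9}}

# Note: the kthread lcores 8 and 9 lie outside the 0xf0 mask; whether the
# sample accepts that depends on its parameter validation, so it may be
# worth double-checking.
```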
> Once the application is started up, I set up two IP addresses for the KNI interfaces
> ifconfig vEth0_0 172.25.48.200
> ifconfig vEth1_0 172.25.48.201
> Both seem to initialize correctly, and the ‘kni’ application notices that the NIC status has changed.
> The ‘kni’ application seems to start up correctly. I’m not getting any errors, but equally, I’m not seeing traffic get routed into the Linux network stack as I’d expect.
> I’ve done several tests:
> * Sent a low level of traffic from the Ixia (a few packets a second)
> - The Ixia is directly connected, so the data is not going missing in the networking infrastructure
> - The ‘skeleton’ application works as expected, so the Ixia, the cabling, and a good part of the DPDK appear to be working
> - A ‘tcpdump’ on the relevant network interfaces shows no incoming traffic
> - Watching the incoming packet rate with ‘sar -n DEV 1 1000’ shows the kernel seeing no incoming traffic on the DPDK interfaces
> - The statistics from the ‘kni’ application show no inbound data
> - A packet capture on the Ixia interface shows outbound packets but no return packets.
> * Started the ‘kni’ application and the Ixia, then attempted to ping the Ixia
> - In this case, the statistics from the ‘kni’ application do show the TX counters incrementing
> - However, when I examine the Ixia’s packet capture, I see no signs of ‘ping’ packets.
> Dmesg contents
> When I bring up the KNI interfaces, I get the following lines in the ‘dmesg’ log:
> [71051.312668] KNI: /dev/kni opened
> [71051.470926] KNI: Creating kni...
> [71051.471280] KNI: tx_phys: 0x00000000375313c0, tx_q addr: 0xffff8800
> [71051.471281] KNI: rx_phys: 0x000000003752f340, rx_q addr: 0xffff8800
> [71051.471282] KNI: alloc_phys: 0x000000003752d2c0, alloc_q addr: 0xffff8800
> [71051.471282] KNI: free_phys: 0x000000003752b240, free_q addr: 0xffff8800
> [71051.471284] KNI: req_phys: 0x00000000375291c0, req_q addr: 0xffff8800
> [71051.471285] KNI: resp_phys: 0x0000000037527140, resp_q addr: 0xffff8800
> [71051.471286] KNI: mbuf_phys: 0x000000083c27dec0, mbuf_kva: 0xffff8808
> [71051.471287] KNI: mbuf_va: 0x00007fe07ce7dec0
> [71051.471287] KNI: mbuf_size: 2048
> [71051.471306] KNI: pci_bus: 03:00:00
> [71051.506633] uio_pci_generic 0000:03:00.0: (PCI Express:5.0GT/s:Width x8)
> [71051.506981] a0:42:3f:29:b3:ae
> [71051.507710] uio_pci_generic 0000:03:00.0 (unregistered net_device): MAC: 2, PHY: 0, PBA No: FFFFFF-0FF
> [71051.508340] uio_pci_generic 0000:03:00.0 (unregistered net_device): Enabled Features: RxQ: 1 TxQ: 1
> [71051.510093] uio_pci_generic 0000:03:00.0 (unregistered net_device): Intel(R) 10 Gigabit Network Connection
> [71051.668942] KNI: Creating kni...
> [71051.669291] KNI: tx_phys: 0x0000000037522f40, tx_q addr: 0xffff880037522f40
> [71051.669292] KNI: rx_phys: 0x0000000037520ec0, rx_q addr: 0xffff880037520ec0
> [71051.669294] KNI: alloc_phys: 0x000000003751ee40, alloc_q addr: 0xffff88003751ee40
> [71051.669295] KNI: free_phys: 0x000000003751cdc0, free_q addr: 0xffff88003751cdc0
> [71051.669296] KNI: req_phys: 0x000000003751ad40, req_q addr: 0xffff88003751ad40
> [71051.669297] KNI: resp_phys: 0x0000000037518cc0, resp_q addr: 0xffff880037518cc0
> [71051.669297] KNI: mbuf_phys: 0x000000083c27dec0, mbuf_kva: 0xffff88083c27dec0
> [71051.669298] KNI: mbuf_va: 0x00007fe07ce7dec0
> [71051.669299] KNI: mbuf_size: 2048
> [71051.669306] KNI: pci_bus: 03:00:00
> [71051.669308] KNI: pci_bus: 03:00:01
> [71051.704749] uio_pci_generic 0000:03:00.1: (PCI Express:5.0GT/s:Width x8)
> [71051.705096] a0:42:3f:29:b3:af
> [71051.705824] uio_pci_generic 0000:03:00.1 (unregistered net_device): MAC: 2, PHY: 0, PBA No: FFFFFF-0FF
> [71051.706452] uio_pci_generic 0000:03:00.1 (unregistered net_device): Enabled Features: RxQ: 1 TxQ: 1
> [71051.708210] uio_pci_generic 0000:03:00.1 (unregistered net_device): Intel(R) 10 Gigabit Network Connection
> [74051.258432] KNI: Successfully release kni named vEth0_0
> [74054.324270] KNI: Successfully release kni named vEth1_0
> [74054.379908] KNI: /dev/kni closed