[dpdk-dev] DPDK PDUMP Issue

Dikshant Chitkara dchitkara at Airspan.com
Tue Jul 28 19:08:58 CEST 2020


Hi Stephen,

If that were the case, then why did pdump work with testpmd?

See the testpmd and pdump logs below:

Testpmd:

[root@flexran3 x86_64-native-linux-icc]# ./app/testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:41:00.0 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:41:00.1 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
EAL: PCI device 0000:88:00.1 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 3C:FD:FE:CD:34:A4
Checking link statuses...
Done
testpmd>
Port 0: link state change event

testpmd>
testpmd>
testpmd> start tx_first
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 5 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=8 hthresh=8  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
Invalid RX queue_id=0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.


PDUMP:

[root@flexran3 x86_64-native-linux-icc]# ./app/dpdk-pdump -d librte_pmd_i40e.so -d librte_pmd_pcap.so -- --pdump 'port=0,queue=*,tx-dev=/home/dchitkara/capture.pcap'
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_120725_2d88411f7f7d4
EAL: Probing VFIO support...
EAL: PCI device 0000:41:00.0 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:41:00.1 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
EAL: PCI device 0000:88:00.1 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
Port 1 MAC: 02 70 63 61 70 00
 core (0), capture for (1) tuples
 - port 0 device ((null)) queue 65535
^C

Signal 2 received, preparing to exit...
##### PDUMP DEBUG STATS #####
 -packets dequeued:                     32
 -packets transmitted to vdev:          32
 -packets freed:                        0
[root@flexran3 x86_64-native-linux-icc]
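
As a sanity check, the pcap written by pdump can be read back with tcpdump (assuming it is installed on the box):

# count the frames dpdk-pdump wrote to the capture file
tcpdump -nn -r /home/dchitkara/capture.pcap | wc -l

This should report the same 32 packets counted in the PDUMP DEBUG STATS above.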



-----Original Message-----
From: Stephen Hemminger <stephen at networkplumber.org> 
Sent: 28 July 2020 22:34
To: Dikshant Chitkara <dchitkara at Airspan.com>
Cc: Varghese, Vipin <vipin.varghese at intel.com>; users at dpdk.org; dev at dpdk.org; Amir Ilan <ailan at Airspan.com>; Veeresh Patil <vpatil at Airspan.com>
Subject: Re: [dpdk-dev] DPDK PDUMP Issue

On Tue, 28 Jul 2020 16:41:38 +0000 Dikshant Chitkara <dchitkara at Airspan.com> wrote:

> Hi Stephen,
> Our system has 2 sockets as seen from below :
> 
> [root@flexran3 dchitkara]# lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                80
> On-line CPU(s) list:   0-79
> Thread(s) per core:    2
> Core(s) per socket:    20
> Socket(s):             2
> NUMA node(s):          2

Did you configure hugepages on both NUMA nodes?
You might be able to get away with configuring only the node the device is attached to, but most likely both nodes need dedicated hugepages.
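
For 2 MB pages that is something along these lines (the page counts below are only illustrative; size them for your mempools):

# reserve 2 MB hugepages on each NUMA node through sysfs
echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 2048 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# confirm what each node actually got
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages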

