[dpdk-users] PDUMP: failed to send to server:Connection refused

Sandeep Rayapudi rayapudisandeep at gmail.com
Thu Aug 25 18:01:03 CEST 2016


Hi all,

I'm trying the following scenario, and pdump does not start even though the
traffic generator is running. The idea is to generate traffic from one host
and capture it on another host.

1. Downloaded the latest DPDK on two hosts and compiled it with
CONFIG_RTE_LIBRTE_PMD_PCAP=y
2. On both hosts, I bound one of the NICs to DPDK
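For reference, the build and bind steps above look roughly like this (a sketch only; the PCI address, kernel module, and script name are assumptions and vary by DPDK release):

```shell
# Enable the pcap PMD before building (16.07-era config layout assumed)
sed -i 's/CONFIG_RTE_LIBRTE_PMD_PCAP=n/CONFIG_RTE_LIBRTE_PMD_PCAP=y/' config/common_base
make install T=x86_64-native-linuxapp-gcc

# Bind one NIC to a DPDK-compatible driver (PCI address 05:00.0 is an example)
modprobe uio_pci_generic
./tools/dpdk_nic_bind.py --bind=uio_pci_generic 05:00.0
```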
3. On host 1, I ran:
./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0x1f -n 3 -- -P -m "[1:3].0"
The packet generator starts and prints:

   Copyright (c) <2010-2016>, Intel Corporation. All rights reserved.
   Pktgen created by: Keith Wiles -- >>> Powered by Intel® DPDK <<<

Lua 5.3.2  Copyright (C) 1994-2015 Lua.org, PUC-Rio
>>> Packet Burst 32, RX Desc 512, TX Desc 512, mbufs/port 4096, mbuf cache 512

=== port to lcore mapping table (# lcores 5) ===
   lcore:     0     1     2     3     4
port   0:  D: T  1: 0  0: 0  0: 1  0: 0 =  1: 1
Total   :  0: 0  1: 0  0: 0  0: 1  0: 0
    Display and Timer on lcore 0, rx:tx counts per port/lcore

Configuring 2 ports, MBUF Size 1920, MBUF Cache Size 512
Lcore:
    1, RX-Only
                RX( 1): ( 0: 0)
    3, TX-Only
                TX( 1): ( 0: 0)

Port :
    0, nb_lcores  2, private 0x8ac490, lcores:  1  3



** Dev Info (rte_ixgbe_pmd:0) **
   max_vfs        :   0 min_rx_bufsize    :1024 max_rx_pktlen : 15872 max_rx_queues : 128 max_tx_queues:  64
   max_mac_addrs  : 127 max_hash_mac_addrs:4096 max_vmdq_pools:    64
   rx_offload_capa:  31 tx_offload_capa   :  63 reta_size     :   128 flow_type_rss_offloads:0000000000038d34
   vmdq_queue_base:   0 vmdq_queue_num    : 128 vmdq_pool_base:     0
** RX Conf **
   pthreash       :   8 hthresh          :   8 wthresh        :     0
   Free Thresh    :  32 Drop Enable      :   0 Deferred Start :     0
** TX Conf **
   pthreash       :  32 hthresh          :   0 wthresh        :     0
   Free Thresh    :  32 RS Thresh        :  32 Deferred Start :     0 TXQ Flags:00000f01

Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 00:11:0a:67:d7:dc
    Create: Default RX  0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 192 =   8193 KB headroom 128 2176
      Set RX queue stats mapping pid 0, q 0, lcore 1


    Create: Default TX  0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 192 =   8193 KB headroom 128 2176
    Create: Range TX    0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 192 =   8193 KB headroom 128 2176
    Create: Sequence TX 0:0  - Memory used (MBUFs 4096 x (size 1920 + Hdr 128)) + 192 =   8193 KB headroom 128 2176
    Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 1920 + Hdr 128)) + 192 =    129 KB headroom 128 2176

    Port memory used =  32897 KB
   Total memory used =  32897 KB
Port  0: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>


=== Display processing on lcore 0
WARNING: Nothing to do on lcore 2: exiting
WARNING: Nothing to do on lcore 4: exiting
  RX processing lcore:   1 rx:  1 tx:  0
  TX processing lcore:   3 rx:  0 tx:  1






/ Ports 0-1 of 2   <Main Page>  Copyright (c) <2010-2016>, Intel Corporation
  Flags:Port      :   P--------------:0
Link State        :       <UP-10000-FD>     ----TotalRate----
Pkts/s Max/Rx     :                 0/0                   0/0
       Max/Tx     :                 0/0                   0/0
MBits/s Rx/Tx     :                 0/0                   0/0
Broadcast         :                   0
Multicast         :                   0
  64 Bytes        :                   0
  65-127          :                   0
  128-255         :                   0
  256-511         :                   0
  512-1023        :                   0
  1024-1518       :                   0
Runts/Jumbos      :                 0/0
Errors Rx/Tx      :                 0/0
Total Rx Pkts     :                   0
      Tx Pkts     :                   0
      Rx MBs      :                   0
      Tx MBs      :                   0
ARP/ICMP Pkts     :                 0/0
                  :
Pattern Type      :             abcd...
Tx Count/% Rate   :      Forever / 100%
PktSize/Tx Burst  :           64 /   32
Src/Dest Port     :         1234 / 5678
Pkt Type:VLAN ID  :     IPv4 / TCP:0001
Dst  IP Address   :         192.168.1.1
Src  IP Address   :      192.168.0.1/24
Dst MAC Address   :   00:00:00:00:00:00
Src MAC Address   :   00:11:0a:67:d7:dc
VendID/PCI Addr   :   8086:10fb/05:00.0

4. On host 2, I started pdump:
./x86_64-native-linuxapp-gcc/app/dpdk-pdump --proc-type=secondary -- --pdump 'port=0,queue=*,rx-dev=/tmp/rx-file.pcap'

It gives the following output:

EAL: Detected 56 lcore(s)
EAL: Probing VFIO support...
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
EAL:    This may cause issues with mapping memory into secondary processes
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
PMD: eth_ixgbe_dev_init(): No TX queues configured yet. Using default TX function.
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
PMD: Initializing pmd_pcap for eth_pcap_rx_0
PMD: Creating pcap-backed ethdev on numa socket 0
Port 1 MAC: 00 00 00 01 02 03
PDUMP: failed to send to server:Connection refused, pdump_create_client_socket:725
PDUMP: client request for pdump enable/disable failed
PDUMP: failed to send to server:Connection refused, pdump_create_client_socket:725
PDUMP: client request for pdump enable/disable failed
EAL: Error - exiting with code: 1
  Cause: Unknown error -1
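For comparison, the pdump tool documentation pairs dpdk-pdump with a primary DPDK process running on the same host; a sketch of that pairing (core masks and build paths are illustrative assumptions):

```shell
# Terminal 1: a primary process (testpmd here) that brings up the pdump framework
sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i

# Terminal 2: dpdk-pdump attaches to that primary as a secondary process
sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -- \
    --pdump 'port=0,queue=*,rx-dev=/tmp/rx-file.pcap'
```

In my run there is no separate primary process on host 2.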


It looks like pdump is trying to start in both server and client mode; neither
worked, so it exited. Any idea what could be wrong?

Thanks,
Sandeep

