[dpdk-users] CX4 Lx + DPDK 20.02 + SLES15SP1

Lili Deng lilideng at outlook.com
Fri Mar 13 15:34:07 CET 2020


Hello,

I am using the latest DPDK (20.02) with Mellanox ConnectX-4 Lx virtual functions on SLES 15 SP1, with the mlx5_ib kernel module loaded.
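
Roughly, the module setup was something like this (mlx5_core and ib_uverbs are listed only for completeness, they may already be loaded as dependencies; exact steps may differ slightly):

modprobe mlx5_core
modprobe mlx5_ib
modprobe ib_uverbs
lsmod | grep -E 'mlx5|ib_uverbs'    # confirm the modules are loaded
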
lspci
8cdc:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
d47a:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)

Running the command below hits the issue "Failed to query QP using DevX".
I upgraded rdma-core from version 22.5 to a newer version (27.0) from their repository, but no luck.
Could you give me some advice here? I appreciate your help!
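
For completeness, this is roughly how the installed userspace pieces were checked (package names assumed for SLES; paths may differ):

rpm -q rdma-core libibverbs
ibv_devices                    # both VFs should show up as mlx5_0 / mlx5_1
ls /etc/libibverbs.d/          # mlx5 provider file should be present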
 ​
./build/build/app/test-pmd/testpmd -l 0-1 -w d47a:00:02.0 -- --i
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device d47a:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1016 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
common_mlx5: mlx5_devx_cmds.c:581: mlx5_devx_cmd_qp_query_tis_td(): Failed to query QP using DevX
net_mlx5: mlx5_txq.c:754: mlx5_txq_obj_new(): Fail to query port 0 Tx queue 0 QP TIS transport domain
PANIC in mlx5_txq_obj_new():
line 788        assert "(mlx5_glue->destroy_cq(tmpl.cq)) == 0" failed
9: [./build/build/app/test-pmd/testpmd(_start+0x2a) [0x6c227a]]
8: [/lib64/libc.so.6(__libc_start_main+0xea) [0x7f1d2abaf34a]]
7: [./build/build/app/test-pmd/testpmd(main+0x612) [0x4ee982]]
6: [./build/build/app/test-pmd/testpmd(start_port+0x3e9) [0x6c6519]]
5: [./build/build/app/test-pmd/testpmd(rte_eth_dev_start+0x178) [0x7f27a8]]
4: [./build/build/app/test-pmd/testpmd(mlx5_dev_start+0xed) [0xc1443d]]
3: [./build/build/app/test-pmd/testpmd(mlx5_txq_obj_new+0xad1) [0xba2cd1]]
2: [./build/build/app/test-pmd/testpmd(__rte_panic+0xb8) [0x4dec12]]
1: [./build/build/app/test-pmd/testpmd(rte_dump_stack+0x16) [0x83d196]]
Aborted (core dumped)
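
If it helps, I can also capture a more verbose run with the mlx5 PMD logs raised, e.g. (assuming the usual EAL --log-level syntax):

./build/build/app/test-pmd/testpmd -l 0-1 -w d47a:00:02.0 --log-level=pmd.net.mlx5:debug -- -i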
​
ibv_devinfo
hca_id: mlx5_0
        transport:                      InfiniBand (0)
        fw_ver:                         14.25.8100
        node_guid:                      000d:3aff:fef9:7f37
        sys_image_guid:                 0000:0000:0000:0000
        vendor_id:                      0x02c9
        vendor_part_id:                 4118
        hw_ver:                         0x80
        board_id:                       MSF0010110035
        phys_port_cnt:                  1
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             1024 (3)
                        sm_lid:                 0
                        port_lid:               0
                        port_lmc:               0x00
                        link_layer:             Ethernet

hca_id: mlx5_1
        transport:                      InfiniBand (0)
        fw_ver:                         14.25.8100
        node_guid:                      000d:3aff:fef9:9e5a
        sys_image_guid:                 0000:0000:0000:0000
        vendor_id:                      0x02c9
        vendor_part_id:                 4118
        hw_ver:                         0x80
        board_id:                       MSF0010110035
        phys_port_cnt:                  1
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             1024 (3)
                        sm_lid:                 0
                        port_lid:               0
                        port_lmc:               0x00
                        link_layer:             Ethernet

Thanks,
Lili

