[dpdk-dev] ConnectX-4/mlx5 crashes around rxq_cqe_comp_en?
Yasuhiro Ohara
yasu at nttv6.jp
Fri Jul 12 18:38:53 CEST 2019
Hi,
I get a crash when I put a significant amount of load on a ConnectX-4/mlx5 NIC,
i.e., 50 Gbps on a 100GbE port.
Thread 22 "lcore-slave-19" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe77ee700 (LWP 33519)]
0x0000555555f010a3 in _mm_storeu_si128 (__B=..., __P=0x10)
at /usr/lib/gcc/x86_64-linux-gnu/7/include/emmintrin.h:721
721 *__P = __B;
(gdb) bt
#0 0x0000555555f010a3 in _mm_storeu_si128 (__B=..., __P=0x10)
at /usr/lib/gcc/x86_64-linux-gnu/7/include/emmintrin.h:721
#1 rxq_cq_decompress_v (rxq=0x22c910ccc0, cq=0x22c8fd1800, elts=0x22c910d240)
at /usr/local/dpdk-stable-18.11.2/drivers/net/mlx5/mlx5_rxtx_vec_sse.h:421
#2 0x0000555555f04b42 in rxq_burst_v (rxq=0x22c910ccc0, pkts=0x7fffe77eba40,
pkts_n=32, err=0x7fffe77dc978)
at /usr/local/dpdk-stable-18.11.2/drivers/net/mlx5/mlx5_rxtx_vec_sse.h:956
#3 0x0000555555f055ea in mlx5_rx_burst_vec (dpdk_rxq=0x22c910ccc0,
pkts=0x7fffe77eba40, pkts_n=32)
at /usr/local/dpdk-stable-18.11.2/drivers/net/mlx5/mlx5_rxtx_vec.c:238
#4 0x0000555555632772 in rte_eth_rx_burst (port_id=4, queue_id=5,
rx_pkts=0x7fffe77eba40, nb_pkts=32)
at /usr/local/dpdk-18.11/x86_64-native-linuxapp-gcc/include/rte_ethdev.h:3879
My environment is:
Ubuntu 18.04.2 LTS 4.15.0-50-generic
MLNX_OFED_LINUX-4.5-1.0.1.0-ubuntu18.04-x86_64
fw_ver: 12.17.2020
vendor_id: 0x02c9
vendor_part_id: 4115
hw_ver: 0x0
board_id: LNR3270110033
DPDK 18.11.2
It looks like CQE decompression is where the crash occurs.
dpdk-stable-18.11.2/drivers/net/mlx5/mlx5_rxtx_vec_sse.h:956
953 /* Decompress the last CQE if compressed. */
954 if (comp_idx < MLX5_VPMD_DESCS_PER_LOOP && comp_idx == n) {
955 assert(comp_idx == (nocmp_n % MLX5_VPMD_DESCS_PER_LOOP));
956 rxq_cq_decompress_v(rxq, &cq[nocmp_n], &elts[nocmp_n]);
And I'm wondering how I can disable CQE compression via the rxq_cqe_comp_en devarg.
<https://doc.dpdk.org/guides-18.02/nics/mlx5.html>
22.5.3. Run-time configuration
rxq_cqe_comp_en parameter [int]
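If I understand the guide correctly, the devarg is passed per PCI device on the EAL command line, so something like the following should disable CQE compression (the PCI address and application name here are placeholders for my setup, not taken from the guide):

```shell
# Sketch: pass rxq_cqe_comp_en=0 as a devarg on the EAL whitelist option
# to disable CQE compression for that mlx5 port.
# 0000:03:00.0 is a placeholder PCI address; ./myapp is a placeholder binary.
./myapp -w 0000:03:00.0,rxq_cqe_comp_en=0 -- ...
```

I have not yet confirmed whether this avoids the crash.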
Any information or guesses are appreciated.
Best regards,
Yasu