[dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak

Yongseok Koh yskoh at mellanox.com
Thu Oct 5 23:46:00 CEST 2017


Hi, Martin

Thanks for your thorough and valuable report. We were able to reproduce the
issue; I found a bug and fixed it. Please refer to the patch [1] I sent to
the mailing list. It might not apply cleanly to v17.08 because I rebased it
on top of Nelio's flow cleanup patch, but as it is a simple patch you can
easily apply it manually.

Thanks,
Yongseok

[1] http://dpdk.org/dev/patchwork/patch/29781

> On Sep 26, 2017, at 2:23 AM, Martin Weiser <martin.weiser at allegro-packets.com> wrote:
> 
> Hi,
> 
> we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK 17.08
> as well as dpdk-next-net and are experiencing mbuf leaks as well as
> crashes (and in some instances even kernel panics in an mlx5 module) under
> certain load conditions.
> 
> We initially saw these issues only in our own DPDK-based application, and
> it took some effort to reproduce them in one of the DPDK example
> applications. However, with the attached patch to the load-balancer
> example we can reproduce the issues reliably.
> 
> The patch may look weird at first, but I will explain why I made these
> changes:
> 
> * the sleep introduced in the worker threads simulates heavy processing,
>   which causes the software rx rings to fill up under load. If the rings
>   are large enough (I increased the ring size with the load-balancer
>   command-line option, as you can see in the example call further down),
>   the mbuf pool may run empty, and I believe this leads to a malfunction
>   in the mlx5 driver. As soon as this happens the NIC stops forwarding
>   traffic, probably because the driver can no longer allocate mbufs for
>   the packets it receives. Unfortunately, most of the mbufs then never
>   return to the mbuf pool, so even when the traffic stops the pool remains
>   almost empty and the application will not forward traffic even at a very
>   low rate (one way to watch the pool is sketched after this list).
> 
> * using the mbuf reference count on top of the situation described above
>   is what makes the mlx5 DPDK driver crash almost immediately under load.
>   In our application we rely on this feature to forward the packet quickly
>   while still sending it to a worker thread for analysis and freeing it
>   once the analysis is done. Here I simulated this by incrementing the
>   mbuf reference count immediately after receiving the mbuf from the
>   driver and then calling rte_pktmbuf_free in the worker thread, which
>   should only decrement the reference count again and not actually free
>   the mbuf (the pattern is sketched right after this list).
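> 
> In essence, the patched I/O and worker paths do something like the
> following (a simplified sketch rather than the exact load-balancer code;
> port, queue and ring names are placeholders):
> 
>     #include <rte_ethdev.h>
>     #include <rte_mbuf.h>
>     #include <rte_ring.h>
> 
>     #define BURST 32
> 
>     /* I/O lcore: take an extra reference right after rx, hand the packets
>      * to a worker for analysis and forward them (simplified; the real
>      * example forwards from a separate tx lcore). */
>     static void
>     io_rx_and_forward(uint8_t port, uint16_t queue, struct rte_ring *to_worker)
>     {
>         struct rte_mbuf *pkts[BURST];
>         unsigned int n = rte_eth_rx_burst(port, queue, pkts, BURST);
>         unsigned int i, q, sent;
> 
>         for (i = 0; i < n; i++)
>             rte_pktmbuf_refcnt_update(pkts[i], 1);  /* refcnt 1 -> 2 */
> 
>         q = rte_ring_enqueue_burst(to_worker, (void **)pkts, n, NULL);
>         for (i = q; i < n; i++)
>             rte_pktmbuf_free(pkts[i]);  /* worker ring full: drop extra ref */
> 
>         sent = rte_eth_tx_burst(port, queue, pkts, n);
>         for (i = sent; i < n; i++)
>             rte_pktmbuf_free(pkts[i]);  /* not sent: drop the tx reference */
>     }
> 
>     /* Worker lcore: analysis done, release our reference. Because the
>      * refcount was bumped to 2 this should only decrement it; the mbuf
>      * returns to the pool once the tx path has dropped its reference too. */
>     static void
>     worker_release(struct rte_ring *from_io)
>     {
>         void *objs[BURST];
>         unsigned int i, n = rte_ring_dequeue_burst(from_io, objs, BURST, NULL);
> 
>         for (i = 0; i < n; i++)
>             rte_pktmbuf_free((struct rte_mbuf *)objs[i]);
>     }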
> 
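> The pool state can be watched with something like the following
> (illustration only, not part of the attached patch; mbuf_pool stands for
> whatever mbuf pool the application created):
> 
>     #include <stdio.h>
>     #include <stdint.h>
>     #include <rte_cycles.h>
>     #include <rte_mempool.h>
> 
>     /* Print the pool usage roughly once per second; after the stall the
>      * available count stays close to zero even once traffic has stopped. */
>     static void
>     log_pool_usage(struct rte_mempool *mbuf_pool)
>     {
>         static uint64_t next_tsc;
>         uint64_t now = rte_rdtsc();
> 
>         if (now < next_tsc)
>             return;
>         next_tsc = now + rte_get_tsc_hz();
> 
>         printf("mbufs available: %u, in use: %u\n",
>                rte_mempool_avail_count(mbuf_pool),
>                rte_mempool_in_use_count(mbuf_pool));
>     }
> 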
> We executed the patched load-balancer application with the following
> command line:
> 
>     ./build/load_balancer -l 3-7 -n 4 -- --rx "(0,0,3),(1,0,3)" \
>         --tx "(0,3),(1,3)" --w "4" --lpm "16.0.0.0/8=>0; 48.0.0.0/8=>1;" \
>         --pos-lb 29 --rsz "1024, 32768, 1024, 1024"
> 
> Then we generated traffic using the t-rex traffic generator and the sfr
> test case. On our machine the issues start to happen when the traffic
> exceeds ~6 Gbps, but this may vary depending on how powerful the test
> machine is (by the way, we were able to reproduce this on different types
> of hardware).
> 
> A typical stack trace looks like this:
> 
>     Thread 1 "load_balancer" received signal SIGSEGV, Segmentation fault.
>     0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>         at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>     716      __builtin_ia32_storedqu ((char *)__P, (__v16qi)__B);
>     (gdb) bt
>     #0  0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>         at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>     #1  rxq_cq_decompress_v (elts=0x7fff3732bef0, cq=0x7ffff7f99380, rxq=0x7fff3732a980)
>         at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:679
>     #2  rxq_burst_v (pkts_n=<optimized out>, pkts=0xa7c7b0 <app+432944>, rxq=0x7fff3732a980)
>         at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1242
>     #3  mlx5_rx_burst_vec (dpdk_rxq=0x7fff3732a980, pkts=<optimized out>, pkts_n=<optimized out>)
>         at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1277
>     #4  0x000000000043c11d in rte_eth_rx_burst (nb_pkts=3599, rx_pkts=0xa7c7b0 <app+432944>, queue_id=0, port_id=0 '\000')
>         at /root/dpdk-next-net//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2781
>     #5  app_lcore_io_rx (lp=lp@entry=0xa7c700 <app+432768>, n_workers=n_workers@entry=1, bsz_rd=bsz_rd@entry=144, bsz_wr=bsz_wr@entry=144, pos_lb=pos_lb@entry=29 '\035')
>         at /root/dpdk-next-net/examples/load_balancer/runtime.c:198
>     #6  0x0000000000447dc0 in app_lcore_main_loop_io ()
>         at /root/dpdk-next-net/examples/load_balancer/runtime.c:485
>     #7  app_lcore_main_loop (arg=<optimized out>)
>         at /root/dpdk-next-net/examples/load_balancer/runtime.c:669
>     #8  0x0000000000495e8b in rte_eal_mp_remote_launch ()
>     #9  0x0000000000441e0d in main (argc=<optimized out>, argv=<optimized out>)
>         at /root/dpdk-next-net/examples/load_balancer/main.c:99
> 
> The crash does not always happen at the exact same spot, but in our tests
> it was always in the same function. In a few instances, instead of an
> application crash, the system froze completely with what appeared to be a
> kernel panic. The last output looked like a crash in the interrupt handler
> of an mlx5 module, but unfortunately I cannot provide the exact output
> right now.
> 
> All tests were performed on an Ubuntu 16.04 server running a
> 4.4.0-96-generic kernel, and the latest Mellanox OFED,
> MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64, was used.
> 
> Any help with this issue is greatly appreciated.
> 
> Best regards,
> Martin
> 
> <test.patch>


