[dpdk-dev] Facing issue with mellanox after increasing number of buffers

chetan bhasin chetan.bhasin017 at gmail.com
Fri Feb 21 05:30:24 CET 2020


Hi,

We are using DPDK underneath VPP. We are facing an issue when we increase
the number of buffers from 100k to 300k after upgrading VPP (18.01 --> 19.08).
As per the log, the following error is seen:

net_mlx5: port %u unable to find virtually contiguous chunk for address
(%p). rte_memseg_contig_walk() failed.
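For anyone trying to reproduce this outside VPP, here is a minimal sketch
(my own illustration, not part of the report) of the EAL walk that fails
above: rte_memseg_contig_walk() iterates over the virtually contiguous
memseg chunks known to the EAL, and per the backtrace below mlx5 relies
on it when registering mempool memory with the NIC.

/* Sketch: dump the virtually contiguous chunks the EAL knows about.
 * Standalone DPDK 19.05 app; run with the usual EAL arguments. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_memory.h>

static int
dump_contig_chunk(const struct rte_memseg_list *msl,
                  const struct rte_memseg *ms, size_t len, void *arg)
{
    (void)msl; (void)arg;
    printf("contig chunk: va=%p len=%zu\n", ms->addr, len);
    return 0; /* returning 0 continues the walk */
}

int
main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;
    rte_memseg_contig_walk(dump_contig_chunk, NULL);
    return 0;
}

If the address from the error never falls inside any printed chunk, the
mempool was populated from memory the EAL does not see as contiguous.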

VPP 18.01 uses DPDK 17.11.4.
VPP 19.08 uses DPDK 19.05.
With VPP 20.01 (uses DPDK 19.08), no issue is seen up to 400k buffers.
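For completeness, the buffer count in question is set in VPP's
startup.conf; on 19.08 we configure it roughly like this (300k is the
value from our test, option name as I recall it from the 19.08 docs):

buffers {
  buffers-per-numa 300000
}

On 18.01, if I remember correctly, the equivalent knob was num-mbufs in
the dpdk section.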


*Backtrace looks like:*
    format=0x7f3376768df8 "net_mlx5: port %u unable to find virtually contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s", ap=ap@entry=0x7f3379c4fac8)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:427
#6  0x00007f3375ab2c12 in rte_log (level=level@entry=5, logtype=<optimized out>,
    format=format@entry=0x7f3376768df8 "net_mlx5: port %u unable to find virtually contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s")
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:443
#7  0x00007f3375dc47fa in mlx5_mr_create_primary (dev=dev@entry=0x7f3376e9d940 <rte_eth_devices>,
    entry=entry@entry=0x7ef5c00d02ca, addr=addr@entry=69384463936)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:627
#8  0x00007f3375abe238 in mlx5_mr_create (addr=69384463936, entry=0x7ef5c00d02ca, dev=0x7f3376e9d940 <rte_eth_devices>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:833
#9  mlx5_mr_lookup_dev (dev=0x7f3376e9d940 <rte_eth_devices>, mr_ctrl=mr_ctrl@entry=0x7ef5c00d022e,
    entry=0x7ef5c00d02ca, addr=69384463936)


*Crash backtrace looks like:*
#0  mlx5_tx_complete (txq=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_rxtx.h:588
#1  mlx5_tx_burst (dpdk_txq=<optimized out>, pkts=0x7fc85686c000, pkts_n=1)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_rxtx.c:563
#2  0x00007fc852d1912e in rte_eth_tx_burst (nb_pkts=1, tx_pkts=0x7fc85686c000, queue_id=0, port_id=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/build-root/install-vpp-native/external/include/dpdk/rte_ethdev.h:4309
#3  tx_burst_vector_internal (n_left=1, mb=0x7fc85686c000, xd=0x7fc8568749c0, vm=0x7fc856803800)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/plugins/dpdk/device/device.c:179
#4  dpdk_device_class_tx_fn (vm=0x7fc856803800, node=<optimized out>, f=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/plugins/dpdk/device/device.c:376
#5  0x00007fc9585fe0da in dispatch_node (last_time_stamp=<optimized out>, frame=0x7fc85637c780,
    dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INTERNAL, node=0x7fc85697f440, vm=0x7fc856803800)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/vlib/main.c:1255
#6  dispatch_pending_node (vm=vm@entry=0x7fc856803800, pending_frame_index=pending_frame_index@entry=6,
    last_time_stamp=<optimized out>)
    at /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vpp_1908/src/vlib/main.c:1430
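For context, frames #0-#2 are the standard DPDK transmit path. A minimal
sketch of that call pattern (my own illustration, not VPP's code;
port_id/queue_id are hypothetical and error handling is elided):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
send_one(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *m)
{
    /* rte_eth_tx_burst() dispatches to the PMD tx routine, here
     * mlx5_tx_burst(), which frees completed mbufs via
     * mlx5_tx_complete() -- the frame where the crash happens. */
    uint16_t sent = rte_eth_tx_burst(port_id, queue_id, &m, 1);

    if (sent == 0)
        rte_pktmbuf_free(m); /* queue full: drop rather than leak */
}

The crash inside mlx5_tx_complete() is possibly related to the MR lookup
failure above, since both appeared after the buffer increase.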


Thanks,
Chetan

