Unable to start mlx5 PMD on NUMA node 1
Erez Ferber
erezferber at gmail.com
Tue Jun 25 16:29:13 CEST 2024
Make sure you're running with the fix below, which should have been merged in
v21.11.1 and later.
https://mails.dpdk.org/archives/stable/2022-February/036254.html
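If you are not sure which exact DPDK release your application links against, a
quick sanity check (assuming libdpdk was installed with its pkg-config file) is:
pkg-config --modversion libdpdk
Per the stable backport above, anything reporting 21.11.1 or later on the 21.11
LTS branch should already include it.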
Regards,
Erez
On Tue, 25 Jun 2024 at 12:52, Tomas Jansky <Tomas.Jansky at progress.com>
wrote:
> Hello,
>
> I am experiencing issues with the DPDK (21.11) mlx5 PMD driver when allocating
> hugepages on NUMA node 1.
>
> Card info:
> Ethernet controller: Mellanox Technologies MT2894 Family [ConnectX-6 Lx]
> Subsystem: Mellanox Technologies Device 0020
> NUMA node: 1
> Driver: mlx5_core
> Version: 5.7-1.0.2
> Firmware-version: 26.36.1010 (DEL0000000031)
>
> The card is clearly linked to NUMA node 1, so I assigned hugepages only to
> NUMA node 1:
> cat
> /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
> 0
> cat
> /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
> 4
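> For completeness, this is roughly how the per-node counts were set (a plain
> sysfs sketch, run as root; the node and page-size paths are the same ones
> shown above):
> echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
> echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages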
>
> However, when I run my DPDK application, it fails with the following
> output:
> EAL: Detected CPU lcores: 24
> EAL: Detected NUMA nodes: 2
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/0000:98:00.0/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No free 1048576 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 1
> EAL: No available 2048 kB hugepages reported
> EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:98:00.0 (socket 1)
> mlx5_net: Failed to create ASO bits mem for MR.
> EAL: Error: Invalid memory
> mlx5_net: probe of PCI device 0000:98:00.0 aborted after encountering an
> error: Operation not permitted
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device 0000:98:00.0 cannot be used
> EAL: Bus (pci) probe failed.
>
> If I allocate hugepages only for NUMA node 0, it fails with:
> mlx5_common: Failed to initialize global MR share cache.
>
> So the only working solution for me at the moment is to have hugepages
> allocated on both NUMA nodes, which is odd, considering that e.g. the i40e
> PMD works completely fine with hugepages on only a single NUMA node.
>
> Is there a way to force the mlx5 PMD to allocate everything on a single
> specific NUMA node?
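> For example, is pinning EAL memory to socket 1 only the right direction? A
> rough sketch of what I had in mind (the binary name and core list are
> placeholders; --socket-mem values are per-socket amounts in MB):
> ./dpdk-app -l 12-23 -a 0000:98:00.0 --legacy-mem --socket-mem=0,4096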
>
> Any advice is appreciated
> Tomas
>
>