DPDK with Mellanox ConnectX-5, complaining about mlx5_eth?

CJ Sculti cj at cj.gy
Wed Nov 13 22:43:33 CET 2024


I'm not using vfio; I only bound the interfaces to it once, to test.
Shouldn't I be able to just use the default mlx5_core driver, without
binding to uio_pci_generic?


On Wed, Nov 13, 2024 at 4:26 PM Thomas Monjalon <thomas at monjalon.net> wrote:

> 13/11/2024 21:10, CJ Sculti:
> > I've been running my application for years on igb_uio with Intel NICs.
> > I recently replaced them with a Mellanox ConnectX-5 2x 40 Gbps NIC,
> > updated the DPDK version my application uses, and compiled with support
> > for the mlx5 PMDs. Both 40 Gbps ports are up with link, and both are in
> > Ethernet mode, not InfiniBand mode. However, when I start my application
> > I get complaints about it trying to load 'mlx5_eth'. Both are bound to
> > the mlx5_core driver at the moment. When I bind them to vfio-pci or
> > uio_pci_generic, my application fails to recognize them at all as valid
> > DPDK devices. Anyone have any ideas? Also, it's strange that it only
> > complains about one. I have them configured in a bond in the kernel, as
> > my application requires that.
>
> You must not bind mlx5 devices with VFIO.
> I recommend reading documentation.
> You can start here:
> https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html#bifurcated-driver
> then
> https://doc.dpdk.org/guides/platform/mlx5.html#design
>
>
>
>
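The advice in the reply can be sketched as a short recovery sequence. This is a sketch only: the PCI addresses below are placeholders (they are not from this thread), and it assumes DPDK's dpdk-devbind.py tool is on PATH:

```shell
# mlx5 is a bifurcated driver: the DPDK PMD works alongside the kernel
# driver, so the ConnectX-5 ports must stay bound to mlx5_core -- do NOT
# bind them to vfio-pci or uio_pci_generic.
dpdk-devbind.py --status            # check what each port is bound to
# If a port was bound to vfio-pci/uio_pci_generic earlier, rebind it to
# the kernel driver (placeholder PCI addresses):
dpdk-devbind.py -b mlx5_core 0000:3b:00.0 0000:3b:00.1
```

With the ports back on mlx5_core, the application can select them by PCI address via EAL allow-list options (e.g. `-a 0000:3b:00.0`), per the mlx5 platform guide linked above.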

