[dpdk-users] Mellanox + DPDK + Docker/Kubernetes

Cliff Burdick shaklee3 at gmail.com
Fri Feb 15 03:16:41 CET 2019


This is bare metal (PF). I actually traced down where it's failing in the
Mellanox driver: it does an ioctl with the name of the interface, and that
call fails since the devices aren't visible to the application (they're not
in /proc/net/dev). The Mellanox driver did, however, successfully pull the
interface name from the ib driver. I think this is probably just not possible
without something like Multus to expose multiple devices in an unofficial
way. I'm assuming this would actually work with Intel NICs, since those
aren't visible to the kernel as regular net devices at all and thus wouldn't
need the ioctl.
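
For reference, here's a minimal standalone sketch of the kind of call that
fails (this is not the actual mlx5 PMD code, and the interface name is just a
placeholder): it asks the kernel for a MAC address by interface name, which
only works if that netdev exists in the caller's network namespace. Run
inside a pod without hostNetwork, it fails with "No such device", matching
the errno in the log below.

#include <errno.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* Placeholder interface name; pass the real one as argv[1]. */
	const char *ifname = argc > 1 ? argv[1] : "enp1s0f0";
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

	/* Resolved against the current network namespace: if the netdev is
	 * only visible on the host, this fails with ENODEV. */
	if (ioctl(fd, SIOCGIFHWADDR, &ifr) < 0)
		fprintf(stderr, "ioctl(SIOCGIFHWADDR, %s): %s\n",
			ifname, strerror(errno));
	else
		printf("%s has a MAC address; netdev is visible here\n",
		       ifname);

	close(fd);
	return 0;
}

With hostNetwork: true the pod shares the host's network namespace, so the
same lookup succeeds, which lines up with the probe working in that mode.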

On Thu, Feb 14, 2019, 18:11 Stephen Hemminger <stephen at networkplumber.org>
wrote:

> On Thu, 14 Feb 2019 13:05:28 -0800
> Cliff Burdick <shaklee3 at gmail.com> wrote:
>
> > Hi, I'm trying to get DPDK working inside a container deployed with
> > Kubernetes. It works great if I pass hostNetwork: true (effectively
> > net=host in Docker), so the container sees all the host interfaces.
> > The problem is that this loses all normal Kubernetes networking for
> > the other, non-DPDK interfaces. If I disable host networking, I get
> > the following error:
> >
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:01:00.0 on NUMA socket 0
> > EAL:   probe driver: 15b3:1013 net_mlx5
> > net_mlx5: port 0 cannot get MAC address, is mlx5_en loaded? (errno: No
> > such device)
> > net_mlx5: probe of PCI device 0000:01:00.0 aborted after encountering an
> > error: No such device
> > EAL: Requested device 0000:01:00.0 cannot be used
> >
> > I've tried mounting /sys and /dev in the container from the host, and it
> > still doesn't work. Is there something I can do to get the Mellanox mlx5
> > driver to work inside a container if it can't see the host interfaces?
>
> Is this Mellanox on bare metal (i.e., PF)?
> Or a Mellanox VF as used in Hyper-V/Azure?
>
>

