[PATCH 0/5] net/mlx5: add BlueField socket direct support
Bing Zhao
bingz at nvidia.com
Wed Mar 4 08:26:57 CET 2026
Hi,
> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski at nvidia.com>
> Sent: Monday, March 2, 2026 7:35 PM
> To: Slava Ovsiienko <viacheslavo at nvidia.com>; Bing Zhao
> <bingz at nvidia.com>; Ori Kam <orika at nvidia.com>; Suanming Mou
> <suanmingm at nvidia.com>; Matan Azrad <matan at nvidia.com>
> Cc: dev at dpdk.org; Raslan Darawsheh <rasland at nvidia.com>
> Subject: [PATCH 0/5] net/mlx5: add BlueField socket direct support
>
> The goal of this patchset is to prepare the probing logic in the mlx5
> networking PMD for supporting BlueField DPUs with Socket Direct.
> In such a use case, the BlueField DPU is connected through PCI to 2
> different CPUs on the host.
> Each host CPU sees 2 PFs.
> Each PF is connected to one of the physical ports.
>
> +--------+   +--------+
> |CPU 0   |   |CPU 1   |
> |        |   |        |
> |  pf0   |   |  pf0   |
> |        |   |        |
> |  pf1   |   |  pf1   |
> |        |   |        |
> +---+----+   +-+------+
>     |          |
>     |          |
>     |          |
>     +----+     +-----+
>          |           |
>          |           |
>          |           |
>      +---+-----------+----+
>      |BF3 DPU             |
>      |                    |
>      |  pf0hpf    pf1hpf  |
>      |                    |
>      |  pf2hpf    pf3hpf  |
>      |                    |
>      |    p0        p1    |
>      +------+------+------+
>      | phy0 |      | phy1 |
>      +------+      +------+
>
>
> On the BlueField DPU, ARM Linux netdevs map to PFs/ports as follows:
>
> - p0 and p1 to physical ports 0 and 1 respectively,
> - pf0hpf and pf2hpf to CPU0 pf0 and CPU1 pf0 respectively,
> - pf1hpf and pf3hpf to CPU0 pf1 and CPU1 pf1 respectively.
>
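The mapping above follows a simple pattern: the DPU-side host PF netdev is pf{N}hpf with N = 2 * cpu + pf. As a minimal sketch of that pattern (the helper names here are hypothetical, purely for illustration and not part of the patchset):

```python
# Illustrative encoding of the netdev naming described above:
# "pf{N}hpf" corresponds to host CPU (N // 2), PF (N % 2).

def host_pf_netdev(cpu: int, pf: int) -> str:
    """Return the DPU-side netdev name for host CPU `cpu`, PF `pf`."""
    return f"pf{2 * cpu + pf}hpf"

def host_pf_owner(netdev: str) -> tuple[int, int]:
    """Inverse mapping: a "pf{N}hpf" netdev name -> (cpu, pf)."""
    n = int(netdev[len("pf"):-len("hpf")])
    return n // 2, n % 2
```

For example, host_pf_netdev(1, 0) gives "pf2hpf", matching the list above.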
> There are several possible ways to use such a setup:
>
> - A single E-Switch (embedded switch) per CPU PF to
>   physical port connection.
> - A shared E-Switch for related CPU PFs:
>   - For example, both pf0hpf and pf2hpf are in the same E-Switch domain.
> - Multiport E-Switch:
>   - All host PFs and physical ports are in the same E-Switch domain.
>
> When a DPDK application is run on the BlueField ARM cores, it should be
> possible for the application to probe all the relevant representors
> (corresponding to the available netdevs).
> Using testpmd syntax, users will be able to do the following:
>
> # Probe both physical ports
> port attach 03:00.0,dv_flow_en=2,representor=pf0-1
>
> # Probe host PF 0 from CPU 0
> # (VF representor index -1 is special encoding for host PF)
> port attach 03:00.0,dv_flow_en=2,representor=pf0vf65535
> # or with explicit controller index
> port attach 03:00.0,dv_flow_en=2,representor=c1pf0vf65535
>
> # Probe host PF 0 from CPU 1
> port attach 03:00.0,dv_flow_en=2,representor=pf2vf65535
> # or with explicit controller index
> port attach 03:00.0,dv_flow_en=2,representor=c2pf2vf65535
>
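For illustration only, the representor devargs values shown above can be decoded with a small parser: an optional c&lt;controller&gt; prefix, then pf&lt;index&gt; and vf&lt;index&gt;, where VF index 65535 is the special encoding for a host PF (i.e. -1 as an unsigned 16-bit value). This sketch covers only the exact forms in the examples (no pf0-1 range syntax); the PMD's real parsing is done in the ethdev/mlx5 C code.

```python
import re

# Hypothetical decoder for the representor devargs shown above:
#   [c<controller>]pf<pf>vf<vf>
# where vf == 65535 encodes a host PF (VF representor index -1).
_REP = re.compile(r"^(?:c(?P<c>\d+))?pf(?P<pf>\d+)vf(?P<vf>\d+)$")

def parse_representor(value: str) -> dict:
    m = _REP.match(value)
    if m is None:
        raise ValueError(f"unrecognized representor spec: {value!r}")
    vf = int(m.group("vf"))
    return {
        "controller": int(m.group("c")) if m.group("c") else None,
        "pf": int(m.group("pf")),
        "vf": vf,
        "host_pf": vf == 65535,  # 65535 == (uint16_t)-1
    }
```

So parse_representor("c1pf0vf65535") yields controller 1, PF 0, host PF, while parse_representor("pf2vf65535") leaves the controller implicit.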
> Patches overview:
>
> - Patches 1 and 2 - Fix the bond detection logic.
>   Previously, the mlx5 PMD relied on "bond" appearing in the IB device
>   name, which is not always the case. Moved to sysfs checks for bonding
>   devices.
> - Patch 3 - Add calculation of the number of physical ports and host PFs.
>   This information will be used to determine how the DPDK port name is
>   generated, instead of relying on a specific setup type.
> - Patch 4 - Change the "representor to IB port" matching logic to
>   directly compare ethdev devargs values against IB port info.
>   Added optional matching on the controller index.
> - Patch 5 - Make DPDK port name generation dynamic, dependent on the
>   types/number of ports instead of a specific setup type.
>   This allows more generic probing, independent of the setup topology.
>
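On the sysfs checks mentioned for patches 1 and 2: on Linux, a bonding master exposes a bonding attribute directory under /sys/class/net/&lt;ifname&gt;/. A rough sketch of such a check follows; this is not the actual PMD code (which is C and works from the IB device's backing netdevs), just the general mechanism.

```python
import os

def netdev_is_bond_master(ifname: str) -> bool:
    """Return True if `ifname` is a Linux bonding master, judged by the
    presence of its sysfs bonding attribute directory."""
    return os.path.isdir(f"/sys/class/net/{ifname}/bonding")
```

Unlike matching on the device name, this holds regardless of what the bond interface was called.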
> Dariusz Sosnowski (5):
> common/mlx5: fix bond check
> net/mlx5: fix bond check
> net/mlx5: calculate number of uplinks and host PFs
> net/mlx5: compare representors explicitly
> net/mlx5: build port name dynamically
>
>  drivers/common/mlx5/linux/mlx5_common_os.c |  86 ++++-
>  drivers/common/mlx5/linux/mlx5_common_os.h |   9 +
>  drivers/net/mlx5/linux/mlx5_os.c           | 356 ++++++++++++++-------
>  drivers/net/mlx5/mlx5.h                    |   2 +
>  4 files changed, 338 insertions(+), 115 deletions(-)
>
> --
> 2.47.3
Series Acked-by: Bing Zhao <bingz at nvidia.com>