[PATCH 0/4] Remove limitations coming from legacy VMDq
Stephen Hemminger
stephen at networkplumber.org
Sun Apr 5 20:47:20 CEST 2026
On Fri, 3 Apr 2026 11:18:31 +0200
David Marchand <david.marchand at redhat.com> wrote:
> Since the commit 88ac4396ad29 ("ethdev: add VMDq support"),
> VMDq has been imposing a maximum number of mac addresses in the
> mac_addr_add/del API.
>
> Nowadays, new Intel drivers do not support the feature, and only a few
> other drivers implement it.
>
> This series proposes to flag drivers that support the feature, and
> remove the limit of number of mac addresses for others.
>
> Next step could be to remove the VMDq pool notion from the generic API.
> However, I have some concerns about this, as changing the quite stable
> mac_addr_add/del API now seems like a lot of noise for little benefit.
>
>
Makes sense. The AI review found a couple of things; I had to poke at it
to get a good description.
Patches 1/4 and 3/4 look good to me.
Patch 2/4 has two issues:
1) In bnxt_reps.c, the new line:
dev_info->dev_capa = RTE_ETH_DEV_CAPA_VMDQ;
overwrites any default capabilities set by the ethdev layer before
the driver callback runs. The other drivers touched by this patch
had the same pre-existing pattern (a plain assignment before the
&= ~FLOW_RULE_KEEP clear), so for them it is no worse. But
bnxt_reps.c previously did only the &= ~ clear, so this is a new
regression. It should be |= instead of =.
2) Several drivers that receive RTE_ETH_DEV_CAPA_VMDQ do not actually
   support VMDq: e1000/em, bnxt representors, and i40e VF representors
   have no max_vmdq_pools and no VMDq configuration. Marking them as
   VMDq-capable is incorrect and would let users attempt VMDq
   configuration on devices that cannot handle it.
Patch 4/4 has a stack overflow:
   iavf_add_del_all_mac_addr() allocates list_req on the stack with
   addr[IAVF_UC_MACADDR_MAX]. At 32768 entries of ~8 bytes each,
   that is roughly 256 KiB on the stack, which will blow the stack on
   most configurations (default thread stacks are often 8 MiB for the
   main thread but can be far smaller for worker threads). The list
   needs to be heap-allocated.