[PATCH 0/4] Remove limitations coming from legacy VMDq
David Marchand
david.marchand at redhat.com
Wed Apr 29 16:22:18 CEST 2026
On Sun, 5 Apr 2026 at 20:47, Stephen Hemminger
<stephen at networkplumber.org> wrote:
>
> On Fri, 3 Apr 2026 11:18:31 +0200
> David Marchand <david.marchand at redhat.com> wrote:
>
> > Since commit 88ac4396ad29 ("ethdev: add VMDq support"),
> > VMDq has been imposing a maximum number of MAC addresses in the
> > mac_addr_add/del API.
> >
> > Nowadays, new Intel drivers do not support the feature, and only a few
> > other drivers implement it.
> >
> > This series proposes to flag drivers that support the feature, and to
> > remove the limit on the number of MAC addresses for the others.
> >
> > A next step could be to remove the VMDq pool notion from the generic API.
> > However, I have some concerns about this, as changing the quite stable
> > mac_addr_add/del API now seems like a lot of noise for not much benefit.
> >
> >
>
> Makes sense. The AI review found a couple of things.
> Had to poke at it to get a good description.
>
> Subject: Re: [PATCH 1/4] ethdev: skip VMDq pools unless configured
>
> Patches 1/4 and 3/4 look good to me.
>
> Patch 2/4 has two issues:
>
> 1) In bnxt_reps.c, the new line:
>
> dev_info->dev_capa = RTE_ETH_DEV_CAPA_VMDQ;
>
> overwrites any default capabilities that were previously set before
> the driver callback. The other drivers in this patch had the same
> pre-existing pattern (plain assignment before &= ~FLOW_RULE_KEEP),
> so for them it's no worse. But bnxt_reps.c previously only did the
> &= ~ clear, so this is a new regression. Should be |= instead of =.
The explicit &= ~FLOW_RULE_KEEP was added as a "documentation" hint
for maintainers.
2fe6f1b76279 ("drivers/net: advertise no support for keeping flow rules")
That said, I don't see much activity on the topic, so this hint probably
missed its target.
In any case, dev_capa is zeroed by ethdev before the driver op is called,
so nothing is broken by an explicit =.
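Just to make the = vs |= point concrete, here is a minimal sketch of a
dev_infos_get callback (illustration only, not the actual bnxt_reps.c code;
RTE_ETH_DEV_CAPA_VMDQ is the flag this series introduces, and the function
name is made up):

/* Sketch only; needs the driver-internal ethdev_driver.h header. */
static int
sketch_dev_infos_get(struct rte_eth_dev *eth_dev,
		     struct rte_eth_dev_info *dev_info)
{
	RTE_SET_USED(eth_dev);

	/* |= only adds the bit and keeps whatever ethdev pre-filled;
	 * a plain = discards it, which, per the discussion above, is
	 * harmless only because ethdev zeroes dev_info before this op. */
	dev_info->dev_capa |= RTE_ETH_DEV_CAPA_VMDQ;

	/* The explicit "documentation" hint mentioned above. */
	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;

	return 0;
}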
>
> 2) Several drivers that receive RTE_ETH_DEV_CAPA_VMDQ don't actually
> support VMDq: e1000/em, bnxt representors, and i40e VF representors
> have no max_vmdq_pools or VMDq configuration. Marking them as
> VMDq-capable seems incorrect and would allow users to attempt VMDq
> configuration on devices that can't handle it.
This one is interesting and it seems to make sense; I'll double check.
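For the record, the consistency I would expect is roughly this (illustrative
sketch only; the helper name is made up, max_vmdq_pools is the existing
dev_info field, and stdbool.h / ethdev_driver.h are assumed):

/* A driver that really supports VMDq reports both a pool limit and the
 * new capability from its dev_infos_get callback; drivers with no VMDq
 * configuration (e1000/em, bnxt and i40e VF representors per the review
 * above) simply set neither. */
static void
sketch_advertise_vmdq(struct rte_eth_dev_info *dev_info, bool has_vmdq)
{
	if (!has_vmdq)
		return;

	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
	dev_info->dev_capa |= RTE_ETH_DEV_CAPA_VMDQ;
}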
>
> Patch 4/4 has a stack overflow:
>
> iavf_add_del_all_mac_addr() allocates list_req on the stack with
> addr[IAVF_UC_MACADDR_MAX]. At 32768 entries of ~8 bytes each,
> that's roughly 256 KiB on the stack. This will blow the stack
> on most configurations. Needs to be heap-allocated.
The heap allocations were removed recently; I'll see how I can rework this.
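Roughly what I have in mind (rough sketch only, not the final patch; the
virtchnl structure names are assumed from virtchnl.h, the function mirrors
the iavf_add_del_all_mac_addr named in the review above, and error handling
is trimmed):

#include <rte_malloc.h>
/* Also assumes the iavf driver headers (iavf.h, virtchnl.h). */

static int
sketch_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
{
	struct virtchnl_ether_addr_list *list_req;
	size_t len;

	/* Move the ~256 KiB buffer off the stack and onto the heap. */
	len = sizeof(*list_req) +
	      sizeof(struct virtchnl_ether_addr) * IAVF_UC_MACADDR_MAX;

	list_req = rte_zmalloc("iavf_mac_list", len, 0);
	if (list_req == NULL)
		return -ENOMEM;

	RTE_SET_USED(adapter);
	RTE_SET_USED(add);
	/* ... fill list_req->list[] from the port MAC table and send the
	 * VIRTCHNL_OP_ADD/DEL_ETH_ADDR request, as the current code does
	 * with the on-stack array ... */

	rte_free(list_req);
	return 0;
}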
--
David Marchand