|SUCCESS| [4/4] net/iavf: accept up to 32k unicast MAC addresses

dpdklab at iol.unh.edu
Sat Apr 4 00:41:57 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162853

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-04-03

commit 20260403091836.1073484-2-david.marchand at redhat.com
Author: David Marchand <david.marchand at redhat.com>

    ethdev: skip VMDq pools unless configured

> @@ -1680,7 +1680,10 @@ eth_dev_mac_restore(struct rte_eth_dev *dev,
> 				continue;
> 
> 			pool = 0;
> -			pool_mask = dev->data->mac_pool_sel[i];
> +			if ((dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) != 0)
> +				pool_mask = dev->data->mac_pool_sel[i];
> +			else
> +				pool_mask = 1;

Does this handle restore correctly if the device was reconfigured and
the VMDq mode changed? If VMDq was enabled and pools were associated,
then VMDq is disabled and the device restarted, does
eth_dev_mac_restore() restore the addresses with pool 0 only, and is
the stale mac_pool_sel[] state ever cleared?

> @@ -5416,6 +5420,12 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
> 		RTE_ETHDEV_LOG_LINE(ERR, "Pool ID must be 0-%d", RTE_ETH_64_POOLS - 1);
> 		return -EINVAL;
> 	}
> +	vmdq = (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) != 0;
> +	if (!vmdq && pool != 0) {
> +		RTE_ETHDEV_LOG_LINE(ERR, "Port %u: VMDq is not configured (pool %d)",
> +			port_id, pool);
> +		return -EINVAL;
> +	}
> 
> 	index = eth_dev_get_mac_addr_index(port_id, addr);
> 	if (index < 0) {
> @@ -5425,7 +5435,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
> 				port_id);
> 			return -ENOSPC;
> 		}
> -	} else {
> +	} else if (vmdq) {
> 		pool_mask = dev->data->mac_pool_sel[index];
> 
> 		/* Check if both MAC address and pool is already there, and do nothing */
> @@ -5440,8 +5450,10 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
> 		/* Update address in NIC data structure */
> 		rte_ether_addr_copy(addr, &dev->data->mac_addrs[index]);
> 
> -		/* Update pool bitmap in NIC data structure */
> -		dev->data->mac_pool_sel[index] |= RTE_BIT64(pool);
> +		if (vmdq) {
> +			/* Update pool bitmap in NIC data structure */
> +			dev->data->mac_pool_sel[index] |= RTE_BIT64(pool);
> +		}

In the branch where eth_dev_get_mac_addr_index() returns >= 0, the
code updates dev->data->mac_addrs[index] unconditionally but only
updates mac_pool_sel[index] when vmdq is true. Does this create
inconsistent state if the same MAC is added multiple times to pool 0
while VMDq is disabled?

[ ... ]

commit 20260403091836.1073484-3-david.marchand at redhat.com
Author: David Marchand <david.marchand at redhat.com>

    ethdev: announce VMDq capability

> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index b677f9491d..0f783b9e98 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1214,7 +1214,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
> 
> 	dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
> 	dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
> -			     RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
> +			     RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP |
> +			     RTE_ETH_DEV_CAPA_VMDQ;

[ ... ]

> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9577b7d848..7ba539e796 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -158,6 +158,7 @@ static const struct {
> 	{RTE_ETH_DEV_CAPA_RXQ_SHARE, "RXQ_SHARE"},
> 	{RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP, "FLOW_RULE_KEEP"},
> 	{RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP, "FLOW_SHARED_OBJECT_KEEP"},
> +	{RTE_ETH_DEV_CAPA_VMDQ, "VMDQ"},
> };
> 
> enum {
> @@ -1581,6 +1582,22 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> 		goto rollback;
> 	}
> 
> +	if (!(dev_info.dev_capa & RTE_ETH_DEV_CAPA_VMDQ)) {
> +		if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) != 0) {
> +			RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support VMDq rx mode",
> +				port_id);
> +			ret = -EINVAL;
> +			goto rollback;
> +		}
> +		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
> +				dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY) {
> +			RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support VMDq tx mode",
> +				port_id);
> +			ret = -EINVAL;
> +			goto rollback;
> +		}
> +	}

Does this check reject legitimate configurations on devices that do
not yet set RTE_ETH_DEV_CAPA_VMDQ but still support VMDq? If any
driver was missed when updating capability flags, previously working
VMDq configurations will now fail in rte_eth_dev_configure().

Also, were all drivers that report max_vmdq_pools > 0 in their
dev_info updated to set RTE_ETH_DEV_CAPA_VMDQ? The patch shows updates
to bnxt, e1000, fm10k, i40e, ixgbe and txgbe - does this cover every
VMDq-capable driver in the tree?

[ ... ]

commit 20260403091836.1073484-4-david.marchand at redhat.com
Author: David Marchand <david.marchand at redhat.com>

    ethdev: hide VMDq internal sizes

> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 1255cd6f2c..a4e9cf5b90 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -119,6 +119,12 @@ struct __rte_cache_aligned rte_eth_dev {
> struct rte_eth_dev_sriov;
> struct rte_eth_dev_owner;
> 
> +/* Definitions used for receive MAC address */
> +#define RTE_ETH_NUM_RECEIVE_MAC_ADDR   128 /**< Maximum nb. of receive mac addr. */
> +
> +/* Definitions used for unicast hash */
> +#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */

These trailing comments push the lines past the 80-character limit.
Is any code still using these macros through ethdev_driver.h after
this move? If not, why keep them in the driver API rather than making
them internal to the drivers that need them?

[ ... ]

commit 20260403091836.1073484-5-david.marchand at redhat.com
Author: David Marchand <david.marchand at redhat.com>

    net/iavf: accept up to 32k unicast MAC addresses

> diff --git a/drivers/net/intel/iavf/iavf.h b/drivers/net/intel/iavf/iavf.h
> index 403c61e2e8..f1dede0694 100644
> --- a/drivers/net/intel/iavf/iavf.h
> +++ b/drivers/net/intel/iavf/iavf.h
> @@ -31,7 +31,8 @@
> #define IAVF_IRQ_MAP_NUM_PER_BUF	 128
> #define IAVF_RXTX_QUEUE_CHUNKS_NUM	 2
> 
> -#define IAVF_NUM_MACADDR_MAX      64
> +#define IAVF_UC_MACADDR_MAX      32768
> +#define IAVF_MC_MACADDR_MAX      64

This increases IAVF_UC_MACADDR_MAX from 64 to 32768. Does this cause any
memory allocation concerns? In iavf_dev_init():

> @@ -2822,11 +2822,11 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
> 
> 	/* copy mac addr */
> 	eth_dev->data->mac_addrs = rte_zmalloc(
> -		"iavf_mac", RTE_ETHER_ADDR_LEN * IAVF_NUM_MACADDR_MAX, 0);
> +		"iavf_mac", RTE_ETHER_ADDR_LEN * IAVF_UC_MACADDR_MAX, 0);

This allocates 32768 * 6 = 196608 bytes (192 KiB) per device. Does
this deserve a release note about the increased memory usage? The
commit message says the hardware provides 32k lookup entries shared
across VFs - should the allocation be sized dynamically from a
capability query rather than statically reserving 192 KiB per VF?

