[dpdk-dev] [RFC] net/ionic: update MTU calculations

Ferruh Yigit ferruh.yigit at intel.com
Tue Dec 15 13:26:41 CET 2020


On 12/10/2020 2:46 AM, Andrew Boyer wrote:
> This RFC is in response to the threads about testpmd mtu settings
> and the deprecation of max-rx-pkt-len.
> 
> It took us a while to figure out what we were supposed to be doing
> in this part of the API. It is not clear if max_rx_pkt_len should be
> an input to or an output from the PMD.

'max_rx_pkt_len' is an input to the PMD, but it needs to stay in sync with the MTU 
value; that is why PMDs update this value when the MTU is updated.
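
Roughly, the common pattern in the '.mtu_set' dev op is something like below 
(untested sketch; it assumes the CRC and a single VLAN tag are counted in the 
frame size, adjust for what your HW supports):

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Keep 'max_rx_pkt_len' in sync after the HW has been updated for 'mtu'.
 * Overhead here assumes CRC kept and one VLAN tag; adjust as needed. */
static void
sync_max_rx_pkt_len(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN +
			      RTE_ETHER_CRC_LEN + RTE_VLAN_HLEN;

	eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
}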

> 
> The code below represents what we believe should happen today, and also
> happens to pass the DTS tests which were failing prior to this change
> (checksum and jumbo_frame at least).
> 
> * dev_mtu_set() argument 'mtu' is the usable frame size

'MTU' is the size of the frame payload, 'max_rx_pkt_len' is the frame size, so the 
relation is:
frame_size = mtu + ether_hdr + CRC [ + VLAN ] [ + VLAN ]

Overall, the size of the frame supported for the same payload changes based on 
whether the PMD supports VLAN or QinQ.
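
In code the relation would be something like this (untested sketch, the helper 
name is made up):

#include <stdbool.h>
#include <rte_ether.h>

/* Frame size implied by a given MTU; the VLAN overhead is only added when
 * the HW accepts tagged frames at that MTU. */
static uint32_t
mtu_to_frame_size(uint16_t mtu, bool vlan, bool qinq)
{
	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

	if (vlan)
		frame_size += RTE_VLAN_HLEN;	/* single tagged frames */
	if (qinq)
		frame_size += RTE_VLAN_HLEN;	/* second tag for QinQ */

	return frame_size;
}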

> * store that 'mtu' plus RTE_ETHER_HDR_LEN in max_rx_pkt_len

This seems wrong; the relation is as above.

> * calculate device rx buffer size as max_rx_pkt_len - 4 (for CRC)
> 

The Rx buffer size and the Rx packet length don't have to be related; it depends 
on what your hardware supports.

'max_rx_pkt_len' is the value that limits what the HW (PHY) accepts. Let's say 
'max_rx_pkt_len' is 9000; then a packet of 8000 bytes will be accepted, and it can 
be written to 8 Rx buffers as 8x1000 if the HW supports 'DEV_RX_OFFLOAD_SCATTER'.

The Rx buffer size is mostly limited by your buffer size, which is the mbuf buffer 
size. If your HW doesn't support 'DEV_RX_OFFLOAD_SCATTER', you should add extra 
checks to be sure 'max_rx_pkt_len' is not bigger than the Rx buffer size.
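
Something like the below in the Rx queue setup would cover it (untested sketch, 
the helper name is made up):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Without Rx scatter support, the largest accepted frame must fit into a
 * single mbuf data room. */
static int
check_max_rx_pkt_len(struct rte_eth_dev *eth_dev, struct rte_mempool *mp)
{
	struct rte_eth_rxmode *rxmode = &eth_dev->data->dev_conf.rxmode;
	uint32_t buf_size = rte_pktmbuf_data_room_size(mp) -
			    RTE_PKTMBUF_HEADROOM;

	if (!(rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) &&
			rxmode->max_rx_pkt_len > buf_size)
		return -EINVAL;

	return 0;
}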

> We do not set any mtu-related values in the dev_configure() step.
> At startup time, by default, the device has a 9K mtu but testpmd et al.
> believe it is set to 1500. It sounds like this will be fixed by the coming
> cleanup, which will be welcomed!

The 'max_rx_pkt_len' is a user configuration, and PMDs should take it into account. 
It doesn't have to be in 'dev_configure()'; it can be taken into account in 
'start()', but if the application requests 1500 bytes, the PMD should configure 
accordingly.
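
For example (untested sketch, 'xxx_hw_set_max_frame()' is a made-up placeholder 
for the device specific command):

#include <rte_ethdev.h>

/* Hypothetical device specific helper, declared only for the sketch. */
int xxx_hw_set_max_frame(struct rte_eth_dev *eth_dev, uint32_t frame_size);

/* Apply the user provided Rx config, here at start() time. */
static int
xxx_apply_rx_config(struct rte_eth_dev *eth_dev)
{
	uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;

	/* program the HW to accept frames up to 'frame_size', CRC included */
	return xxx_hw_set_max_frame(eth_dev, frame_size);
}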

In testpmd there will be a fix in the mtu -> 'frame size' calculation, but the 
default mtu will still be 1500; I am not sure what kind of fix you are expecting.

> 
> Please let me know if any of this is incorrect.
> 
> Thank you,
> Andrew
> 
> Signed-off-by: Andrew Boyer <aboyer at pensando.io>
> ---
>   drivers/net/ionic/ionic_ethdev.c | 33 ++++++++++++++------------------
>   drivers/net/ionic/ionic_lif.c    | 15 +++++++++++++++
>   drivers/net/ionic/ionic_lif.h    |  2 ++
>   drivers/net/ionic/ionic_rxtx.c   | 15 +++++----------
>   4 files changed, 36 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
> index 925da3e29..7000de3f9 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -357,25 +357,19 @@ ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
>   	int err;
>   
>   	IONIC_PRINT_CALL();
> +	IONIC_PRINT(INFO, "Setting mtu %u\n", mtu);
>   
> -	/*
> -	 * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
> -	 * is done by the the API.
> -	 */
> -
> -	/*
> -	 * Max frame size is MTU + Ethernet header + VLAN + QinQ
> -	 * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
> -	 */
> -	max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
> -
> -	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
> -		return -EINVAL;

The max frame size calculation depends on what the HW supports, but if the VLAN 
header is supported, the above calculation and check are correct.

> -
> +	/* Note: mtu check against min/max is done by the API */
>   	err = ionic_lif_change_mtu(lif, mtu);
>   	if (err)
>   		return err;
>   
> +	/* Update max frame size */
> +	max_frame_size = mtu + RTE_ETHER_HDR_LEN;

I guess you are removing the CRC because your HW strips the CRC from the Rx 
buffer, but this limit is not for the Rx buffer, it is for the frame size the HW 
accepts, and since the received frame will have the CRC it should be included in 
the calculation.

> +	eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame_size;
> +
> +	ionic_lif_set_rx_buf_size(lif);
> +
>   	return 0;
>   }
>   
> @@ -394,11 +388,12 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
>   	dev_info->max_tx_queues = (uint16_t)
>   		ident->lif.eth.config.queue_count[IONIC_QTYPE_TXQ];
>   	/* Also add ETHER_CRC_LEN if the adapter is able to keep CRC */
> -	dev_info->min_rx_bufsize = IONIC_MIN_MTU + RTE_ETHER_HDR_LEN;
> -	dev_info->max_rx_pktlen = IONIC_MAX_MTU + RTE_ETHER_HDR_LEN;
> -	dev_info->max_mac_addrs = adapter->max_mac_addrs;
> -	dev_info->min_mtu = IONIC_MIN_MTU;
> -	dev_info->max_mtu = IONIC_MAX_MTU;
> +	dev_info->min_mtu = RTE_MAX((uint32_t)IONIC_MIN_MTU,
> +				ident->lif.eth.min_frame_size);
> +	dev_info->max_mtu = RTE_MIN((uint32_t)IONIC_MAX_MTU,
> +				ident->lif.eth.max_frame_size);
> +	dev_info->min_rx_bufsize = dev_info->min_mtu + RTE_ETHER_HDR_LEN;
> +	dev_info->max_rx_pktlen = dev_info->max_mtu + RTE_ETHER_HDR_LEN;
>   
>   	dev_info->hash_key_size = IONIC_RSS_HASH_KEY_SIZE;
>   	dev_info->reta_size = ident->lif.eth.rss_ind_tbl_sz;
> diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
> index e4816a22c..efcaf3c82 100644
> --- a/drivers/net/ionic/ionic_lif.c
> +++ b/drivers/net/ionic/ionic_lif.c
> @@ -724,6 +724,21 @@ ionic_qcq_free(struct ionic_qcq *qcq)
>   	rte_free(qcq);
>   }
>   
> +void
> +ionic_lif_set_rx_buf_size(struct ionic_lif *lif)
> +{
> +	struct rte_eth_conf *dev_conf = &lif->eth_dev->data->dev_conf;
> +
> +	/*
> +	 * Adjust the size of the LIF's rx buffers based on the
> +	 * current ethdev config.
> +	 * NB: Our buffers are smaller since they do not include the CRC.
> +	 */
> +	lif->rx_buf_size = dev_conf->rxmode.max_rx_pkt_len - RTE_ETHER_CRC_LEN;

What is this 'rx_buf_size' used for? Isn't the mbuf buffer used as the Rx buffer?

> +	IONIC_PRINT(DEBUG, "max_rx_pkt_len %u -> rx_buf_size %u\n",
> +		dev_conf->rxmode.max_rx_pkt_len, lif->rx_buf_size);
> +}
> +
>   int
>   ionic_rx_qcq_alloc(struct ionic_lif *lif, uint32_t index, uint16_t nrxq_descs,
>   		struct ionic_qcq **qcq)
> diff --git a/drivers/net/ionic/ionic_lif.h b/drivers/net/ionic/ionic_lif.h
> index 8e2b42443..37957bc02 100644
> --- a/drivers/net/ionic/ionic_lif.h
> +++ b/drivers/net/ionic/ionic_lif.h
> @@ -84,6 +84,7 @@ struct ionic_lif {
>   	struct ionic_adapter *adapter;
>   	struct rte_eth_dev *eth_dev;
>   	uint16_t port_id;  /**< Device port identifier */
> +	uint16_t rx_buf_size;
>   	uint32_t index;
>   	uint32_t hw_index;
>   	uint32_t state;
> @@ -129,6 +130,7 @@ int ionic_lif_start(struct ionic_lif *lif);
>   int ionic_lif_stop(struct ionic_lif *lif);
>   
>   int ionic_lif_configure(struct ionic_lif *lif);
> +void ionic_lif_set_rx_buf_size(struct ionic_lif *lif);
>   void ionic_lif_reset(struct ionic_lif *lif);
>   
>   int ionic_intr_alloc(struct ionic_lif *lif, struct ionic_intr_info *intr);
> diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
> index b689c8381..cf5875740 100644
> --- a/drivers/net/ionic/ionic_rxtx.c
> +++ b/drivers/net/ionic/ionic_rxtx.c
> @@ -721,8 +721,6 @@ ionic_rx_clean(struct ionic_queue *q,
>   	struct rte_mbuf *rxm = cb_arg;
>   	struct rte_mbuf *rxm_seg;
>   	struct ionic_qcq *rxq = IONIC_Q_TO_QCQ(q);
> -	uint32_t max_frame_size =
> -		rxq->lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
>   	uint64_t pkt_flags = 0;
>   	uint32_t pkt_type;
>   	struct ionic_rx_stats *stats = IONIC_Q_TO_RX_STATS(q);
> @@ -756,8 +754,8 @@ ionic_rx_clean(struct ionic_queue *q,
>   		return;
>   	}
>   
> -	if (cq_desc->len > max_frame_size ||
> -			cq_desc->len == 0) {
> +	if (unlikely(cq_desc->len > rxq->lif->rx_buf_size ||
> +			cq_desc->len == 0)) {
>   		stats->bad_len++;
>   		ionic_rx_recycle(q, q_desc_index, rxm);
>   		return;
> @@ -952,21 +950,20 @@ ionic_rx_fill(struct ionic_qcq *rxq, uint32_t len)
>   int __rte_cold
>   ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
>   {
> -	uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
>   	struct ionic_qcq *rxq;
>   	int err;
>   
>   	rxq = eth_dev->data->rx_queues[rx_queue_id];
>   
>   	IONIC_PRINT(DEBUG, "Starting RX queue %u, %u descs (size: %u)",
> -		rx_queue_id, rxq->q.num_descs, frame_size);
> +		rx_queue_id, rxq->q.num_descs, rxq->lif->rx_buf_size);
>   
>   	err = ionic_lif_rxq_init(rxq);
>   	if (err)
>   		return err;
>   
>   	/* Allocate buffers for descriptor rings */
> -	if (ionic_rx_fill(rxq, frame_size) != 0) {
> +	if (ionic_rx_fill(rxq, rxq->lif->rx_buf_size) != 0) {
>   		IONIC_PRINT(ERR, "Could not alloc mbuf for queue:%d",
>   			rx_queue_id);
>   		return -1;
> @@ -1062,8 +1059,6 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>   		uint16_t nb_pkts)
>   {
>   	struct ionic_qcq *rxq = (struct ionic_qcq *)rx_queue;
> -	uint32_t frame_size =
> -		rxq->lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
>   	struct ionic_cq *cq = &rxq->cq;
>   	struct ionic_rx_service service_cb_arg;
>   
> @@ -1073,7 +1068,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>   
>   	ionic_rxq_service(cq, nb_pkts, &service_cb_arg);
>   
> -	ionic_rx_fill(rxq, frame_size);
> +	ionic_rx_fill(rxq, rxq->lif->rx_buf_size);
>   
>   	return service_cb_arg.nb_rx;
>   }
> 


