[dpdk-dev] [PATCH v3 1/9] ethdev: introduce Rx buffer split

Slava Ovsiienko viacheslavo at nvidia.com
Mon Oct 12 22:22:26 CEST 2020


Hi, Andrew

You are right - the amount of code duplicated from rte_eth_rx_queue_setup() was large,
and indeed it did not look good.

I've updated the code: now rte_eth_rx_queue_setup() and rte_eth_rxseg_queue_setup()
share the underlying internal routine __rte_eth_rx_queue_setup().

Of course, there is some refactoring involved, but it is mostly straightforward, and I hope
you will find it acceptable; please see the v4 of the patchset.
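
For illustration, the resulting shape is roughly the following (a sketch only -
the exact parameter list of the shared routine is my shorthand here, not the
v4 code):

    int
    rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
                           uint16_t nb_rx_desc, unsigned int socket_id,
                           const struct rte_eth_rxconf *rx_conf,
                           struct rte_mempool *mp)
    {
            /* Legacy single-pool case: no segment descriptions. */
            return __rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
                                            socket_id, rx_conf, mp, NULL, 0);
    }

    int
    rte_eth_rxseg_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
                              uint16_t nb_rx_desc, unsigned int socket_id,
                              const struct rte_eth_rxconf *rx_conf,
                              const struct rte_eth_rxseg *rx_seg,
                              uint16_t n_seg)
    {
            /* Buffer split case: per-segment pools, lengths and offsets. */
            return __rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
                                            socket_id, rx_conf, NULL, rx_seg,
                                            n_seg);
    }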

As I said, I do not see a decisive pro or con for either approach.
Anyway, if we decide to move the segment descriptions into the config struct, it is
only a small step from the existing code to implement that approach.
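
For example, the embedded descriptions might look like this (a hypothetical
sketch - the field names here are mine, not from the patchset):

    /* Hypothetical variant: carry the split description in the Rx config. */
    struct rte_eth_rxconf {
            /* ... existing fields (thresholds, rx_drop_en, offloads, ...) */
            const struct rte_eth_rxseg *rx_seg; /* segment descriptions */
            uint16_t rx_nseg;                   /* number of entries in rx_seg */
    };

With such a layout, rte_eth_rx_queue_setup() could keep its current signature
and a buffer split would be requested purely through the configuration.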

With best regards, Slava

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko at oktetlabs.ru>
> Sent: Monday, October 12, 2020 20:11
> To: NBU-Contact-Thomas Monjalon <thomas at monjalon.net>; Slava
> Ovsiienko <viacheslavo at nvidia.com>
> Cc: dev at dpdk.org; stephen at networkplumber.org; ferruh.yigit at intel.com;
> olivier.matz at 6wind.com; jerinjacobk at gmail.com;
> maxime.coquelin at redhat.com; david.marchand at redhat.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/9] ethdev: introduce Rx buffer split
> 
> On 10/12/20 8:03 PM, Thomas Monjalon wrote:
> > 12/10/2020 18:38, Andrew Rybchenko:
> >> On 10/12/20 7:19 PM, Viacheslav Ovsiienko wrote:
> >>>  int
> >>> +rte_eth_rxseg_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> >>> +			  uint16_t nb_rx_desc, unsigned int socket_id,
> >>> +			  const struct rte_eth_rxconf *rx_conf,
> >>> +			  const struct rte_eth_rxseg *rx_seg, uint16_t n_seg)
> >>> +{
> >>> +	int ret;
> >>> +	uint16_t seg_idx;
> >>> +	uint32_t mbp_buf_size;
> >>
> >> <start-of-dup>
> >>
> >>> +	struct rte_eth_dev *dev;
> >>> +	struct rte_eth_dev_info dev_info;
> >>> +	struct rte_eth_rxconf local_conf;
> >>> +	void **rxq;
> >>> +
> >>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> >>> +
> >>> +	dev = &rte_eth_devices[port_id];
> >>> +	if (rx_queue_id >= dev->data->nb_rx_queues) {
> >>> +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", rx_queue_id);
> >>> +		return -EINVAL;
> >>> +	}
> >>
> >> <end-of-dup>
> >>
> >>> +
> >>> +	if (rx_seg == NULL) {
> >>> +		RTE_ETHDEV_LOG(ERR, "Invalid null description pointer\n");
> >>> +		return -EINVAL;
> >>> +	}
> >>> +
> >>> +	if (n_seg == 0) {
> >>> +		RTE_ETHDEV_LOG(ERR, "Invalid zero description number\n");
> >>> +		return -EINVAL;
> >>> +	}
> >>> +
> >>> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxseg_queue_setup,
> >>> +				-ENOTSUP);
> >>> +
> >>
> >> <start-of-dup>
> >>
> >>> +	/*
> >>> +	 * Check the size of the mbuf data buffer.
> >>> +	 * This value must be provided in the private data of the memory pool.
> >>> +	 * First check that the memory pool has a valid private data.
> >>> +	 */
> >>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> >>> +	if (ret != 0)
> >>> +		return ret;
> >>
> >> <end-of-dup>
> >>
> >>> +
> >>> +	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
> >>> +		struct rte_mempool *mp = rx_seg[seg_idx].mp;
> >>> +
> >>> +		if (mp->private_data_size <
> >>> +				sizeof(struct rte_pktmbuf_pool_private)) {
> >>> +			RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n",
> >>> +				mp->name, (int)mp->private_data_size,
> >>> +				(int)sizeof(struct rte_pktmbuf_pool_private));
> >>> +			return -ENOSPC;
> >>> +		}
> >>> +
> >>> +		mbp_buf_size = rte_pktmbuf_data_room_size(mp);
> >>> +		if (mbp_buf_size < rx_seg[seg_idx].length +
> >>> +				   rx_seg[seg_idx].offset +
> >>> +				   (seg_idx ? 0 :
> >>> +				    (uint32_t)RTE_PKTMBUF_HEADROOM)) {
> >>> +			RTE_ETHDEV_LOG(ERR,
> >>> +				"%s mbuf_data_room_size %d < %d"
> >>> +				" (segment length=%d + segment offset=%d)\n",
> >>> +				mp->name, (int)mbp_buf_size,
> >>> +				(int)(rx_seg[seg_idx].length +
> >>> +				      rx_seg[seg_idx].offset),
> >>> +				(int)rx_seg[seg_idx].length,
> >>> +				(int)rx_seg[seg_idx].offset);
> >>> +			return -EINVAL;
> >>> +		}
> >>> +	}
> >>> +
> >>
> >> <start-of-huge-dup>
> >>
> >>> +	/* Use default specified by driver, if nb_rx_desc is zero */
> >>> +	if (nb_rx_desc == 0) {
> >>> +		nb_rx_desc = dev_info.default_rxportconf.ring_size;
> >>> +		/* If driver default is also zero, fall back on EAL default */
> >>> +		if (nb_rx_desc == 0)
> >>> +			nb_rx_desc = RTE_ETH_DEV_FALLBACK_RX_RINGSIZE;
> >>> +	}
> >>> +
> >>> +	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
> >>> +			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
> >>> +			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
> >>> +
> >>> +		RTE_ETHDEV_LOG(ERR,
> >>> +			"Invalid value for nb_rx_desc(=%hu), should be: "
> >>> +			"<= %hu, >= %hu, and a product of %hu\n",
> >>> +			nb_rx_desc, dev_info.rx_desc_lim.nb_max,
> >>> +			dev_info.rx_desc_lim.nb_min,
> >>> +			dev_info.rx_desc_lim.nb_align);
> >>> +		return -EINVAL;
> >>> +	}
> >>> +
> >>> +	if (dev->data->dev_started &&
> >>> +		!(dev_info.dev_capa &
> >>> +			RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
> >>> +		return -EBUSY;
> >>> +
> >>> +	if (dev->data->dev_started &&
> >>> +		(dev->data->rx_queue_state[rx_queue_id] !=
> >>> +			RTE_ETH_QUEUE_STATE_STOPPED))
> >>> +		return -EBUSY;
> >>> +
> >>> +	rxq = dev->data->rx_queues;
> >>> +	if (rxq[rx_queue_id]) {
> >>> +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> >>> +					-ENOTSUP);
> >>> +		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> >>> +		rxq[rx_queue_id] = NULL;
> >>> +	}
> >>> +
> >>> +	if (rx_conf == NULL)
> >>> +		rx_conf = &dev_info.default_rxconf;
> >>> +
> >>> +	local_conf = *rx_conf;
> >>> +
> >>> +	/*
> >>> +	 * If an offloading has already been enabled in
> >>> +	 * rte_eth_dev_configure(), it has been enabled on all queues,
> >>> +	 * so there is no need to enable it in this queue again.
> >>> +	 * The local_conf.offloads input to underlying PMD only carries
> >>> +	 * those offloadings which are only enabled on this queue and
> >>> +	 * not enabled on all queues.
> >>> +	 */
> >>> +	local_conf.offloads &= ~dev->data->dev_conf.rxmode.offloads;
> >>> +
> >>> +	/*
> >>> +	 * New added offloadings for this queue are those not enabled in
> >>> +	 * rte_eth_dev_configure() and they must be per-queue type.
> >>> +	 * A pure per-port offloading can't be enabled on a queue while
> >>> +	 * disabled on another queue. A pure per-port offloading can't
> >>> +	 * be enabled for any queue as new added one if it hasn't been
> >>> +	 * enabled in rte_eth_dev_configure().
> >>> +	 */
> >>> +	if ((local_conf.offloads & dev_info.rx_queue_offload_capa) !=
> >>> +	     local_conf.offloads) {
> >>> +		RTE_ETHDEV_LOG(ERR,
> >>> +			"Ethdev port_id=%d rx_queue_id=%d, new added offloads"
> >>> +			" 0x%"PRIx64" must be within per-queue offload"
> >>> +			" capabilities 0x%"PRIx64" in %s()\n",
> >>> +			port_id, rx_queue_id, local_conf.offloads,
> >>> +			dev_info.rx_queue_offload_capa,
> >>> +			__func__);
> >>> +		return -EINVAL;
> >>> +	}
> >>> +
> >>> +	/*
> >>> +	 * If LRO is enabled, check that the maximum aggregated packet
> >>> +	 * size is supported by the configured device.
> >>> +	 */
> >>> +	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> >>> +		if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
> >>> +			dev->data->dev_conf.rxmode.max_lro_pkt_size =
> >>> +				dev->data->dev_conf.rxmode.max_rx_pkt_len;
> >>> +		int ret = check_lro_pkt_size(port_id,
> >>> +				dev->data->dev_conf.rxmode.max_lro_pkt_size,
> >>> +				dev->data->dev_conf.rxmode.max_rx_pkt_len,
> >>> +				dev_info.max_lro_pkt_size);
> >>> +		if (ret != 0)
> >>> +			return ret;
> >>> +	}
> >>
> >> <end-of-huge-dup>
> >>
> >> IMO it is not acceptable to duplicate so much code.
> >> It is simply unmaintainable.
> >>
> >> NACK
> >
> > Can it be solved by making rte_eth_rx_queue_setup() a wrapper on top
> > of this new rte_eth_rxseg_queue_setup()?
> >
> 
> Could be, but strictly speaking it would break argument validation order and
> error reporting in various cases.
> So, refactoring would be required to keep them consistent.
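
For reference, the wrapper suggested above would look roughly like the sketch
below (the rte_eth_rxseg field names follow the v3 code; whether length == 0
selects the full data room is an assumption of mine). It also shows the concern
raised: the legacy path would then demand the rxseg_queue_setup dev_op even
from PMDs that do not support buffer split, changing the validation behavior.

    int
    rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
                           uint16_t nb_rx_desc, unsigned int socket_id,
                           const struct rte_eth_rxconf *rx_conf,
                           struct rte_mempool *mp)
    {
            /* Describe the single pool as one segment with default sizes. */
            struct rte_eth_rxseg seg = {
                    .mp = mp,
                    .length = 0,  /* assumed: 0 means full mbuf data room */
                    .offset = 0,  /* assumed: 0 means default headroom */
            };

            return rte_eth_rxseg_queue_setup(port_id, rx_queue_id, nb_rx_desc,
                                             socket_id, rx_conf, &seg, 1);
    }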

