[dpdk-dev] [PATCH 24/38] net/dpaa: add support for Tx and Rx queue setup

Ferruh Yigit ferruh.yigit at intel.com
Wed Jun 28 17:45:07 CEST 2017


On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal at nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain at nxp.com>
> ---
>  doc/guides/nics/features/dpaa.ini |   1 +
>  drivers/net/dpaa/Makefile         |   4 +
>  drivers/net/dpaa/dpaa_ethdev.c    | 279 ++++++++++++++++++++++++++++++++-
>  drivers/net/dpaa/dpaa_ethdev.h    |   6 +
>  drivers/net/dpaa/dpaa_rxtx.c      | 313 ++++++++++++++++++++++++++++++++++++++
>  drivers/net/dpaa/dpaa_rxtx.h      |  61 ++++++++

This patch adds initial Rx/Tx support, as well as the Rx/Tx queue setup
mentioned in the patch subject.

I would be for splitting the patch, but even if it is not split, I would
suggest updating the patch subject and commit log to cover the full
patch content.

<...>
> --- a/doc/guides/nics/features/dpaa.ini
> +++ b/doc/guides/nics/features/dpaa.ini
> @@ -4,5 +4,6 @@
>  ; Refer to default.ini for the full list of available PMD features.
>  ;
>  [Features]
> +Queue start/stop     = Y

This requires the following dev_ops to be implemented:
rx_queue_start, rx_queue_stop, tx_queue_start and tx_queue_stop.
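
A rough sketch of what that implies (the handler name below is made up
for illustration, it is not in this patch):

	static int
	dpaa_eth_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id)
	{
		/* resume the frame queue here, then let ethdev track state */
		dev->data->rx_queue_state[queue_id] =
			RTE_ETH_QUEUE_STATE_STARTED;
		return 0;
	}

	/* analogous handlers for rx_queue_stop / tx_queue_start /
	 * tx_queue_stop, hooked up in eth_dev_ops:
	 *	.rx_queue_start = dpaa_eth_rx_queue_start,
	 *	...
	 */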

>  ARMv8                = Y
>  Usage doc            = Y

<...>

> +
> +	/* Initialize Rx FQ's */
> +	if (getenv("DPAA_NUM_RX_QUEUES"))

I think this was discussed before: should a PMD get config options from
an environment variable? Although this works, I am for a more explicit
method, like devargs.
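
For example, roughly, with rte_kvargs (the "num_rx_queues" key name is
just a suggestion):

	#include <stdlib.h>
	#include <rte_kvargs.h>

	#define DPAA_NUM_RX_QUEUES_ARG "num_rx_queues"

	static int
	num_rx_queues_handler(const char *key __rte_unused,
			      const char *value, void *opaque)
	{
		*(unsigned int *)opaque = (unsigned int)atoi(value);
		return 0;
	}

	static int
	dpaa_parse_devargs(const char *args, unsigned int *num_rx_fqs)
	{
		static const char * const valid[] = {
			DPAA_NUM_RX_QUEUES_ARG, NULL };
		struct rte_kvargs *kvlist = rte_kvargs_parse(args, valid);

		if (kvlist == NULL)
			return -EINVAL;
		rte_kvargs_process(kvlist, DPAA_NUM_RX_QUEUES_ARG,
				   num_rx_queues_handler, num_rx_fqs);
		rte_kvargs_free(kvlist);
		return 0;
	}

That way the queue count can be passed on the device's devargs string,
e.g. "...,num_rx_queues=4", instead of via the environment.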

<...>
> +
> +	dpaa_intf->rx_queues = rte_zmalloc(NULL,
> +		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);

A NULL check perhaps?

And if multi-process support is desired, this should be done only for
the primary process.
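
i.e. something along the lines of (sketch):

	/* allocate the per-port queue array only once, in the primary */
	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
		dpaa_intf->rx_queues = rte_zmalloc(NULL,
			sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
		if (dpaa_intf->rx_queues == NULL)
			return -ENOMEM;
	}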

<...>
> +	/* Allocate memory for storing MAC addresses */
> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
> +		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
> +	if (eth_dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
> +						"store MAC addresses",
> +				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);

Anything to clean up before exit?

> +		return -ENOMEM;
> +	}
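
At least the rx_queues array allocated earlier looks like it should be
released on this error path, e.g. (sketch):

	if (eth_dev->data->mac_addrs == NULL) {
		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
				  "store MAC addresses",
			     ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
		rte_free(dpaa_intf->rx_queues);
		dpaa_intf->rx_queues = NULL;
		return -ENOMEM;
	}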

<...>
> +uint16_t dpaa_eth_queue_rx(void *q,
> +			   struct rte_mbuf **bufs,
> +			   uint16_t nb_bufs)
> +{
> +	struct qman_fq *fq = q;
> +	struct qm_dqrr_entry *dq;
> +	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
> +	int ret;
> +
> +	ret = rte_dpaa_portal_init((void *)0);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, "Failure in affining portal");
> +		return 0;
> +	}

This is the rx_pkt_burst function, right? Is it OK to call
rte_dpaa_portal_init() in the Rx data path, on every burst?
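
If the portal affinity is only needed once per lcore, the hot path
could be reduced to a single branch with a per-lcore flag, something
like (sketch, flag name made up):

	/* at file scope */
	static RTE_DEFINE_PER_LCORE(int, dpaa_portal_up);

	/* then at the top of dpaa_eth_queue_rx(), replacing the
	 * unconditional call:
	 */
	if (unlikely(!RTE_PER_LCORE(dpaa_portal_up))) {
		ret = rte_dpaa_portal_init((void *)0);
		if (ret) {
			PMD_DRV_LOG(ERR, "Failure in affining portal");
			return 0;
		}
		RTE_PER_LCORE(dpaa_portal_up) = 1;
	}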

<...>
> +	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
> +	if (!buf)
> +		goto out;

The goto is not required here; both paths fall through to the same
return.

> +
> +out:
> +	return (void *)buf;
> +}
> +
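
i.e. the whole tail could simply be:

	return (void *)((uint64_t)rte_dpaa_mem_ptov(bufs.addr) -
			bp_info->meta_data_size);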

<...>
> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
> +			      struct rte_mbuf **bufs __rte_unused,
> +		uint16_t nb_bufs __rte_unused)
> +{
> +	PMD_TX_LOG(DEBUG, "Drop all packets");

Should the mbufs be freed here?

> +
> +	/* Drop all incoming packets. No need to free packets here
> +	 * because the rte_eth f/w frees up the packets through tx_buffer
> +	 * callback in case this functions returns count less than nb_bufs
> +	 */
> +	return 0;
> +}
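
As far as I can tell, rte_eth_tx_burst() itself never frees packets the
PMD did not accept; the drop-on-error callback only runs when the
application uses the rte_eth_tx_buffer() helpers, which it may not. If
this function really is meant to drop everything, it may be safer to
free the mbufs and report them as consumed, e.g. (sketch):

	uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
				      struct rte_mbuf **bufs,
				      uint16_t nb_bufs)
	{
		uint16_t i;

		PMD_TX_LOG(DEBUG, "Drop all packets");

		/* drop: free the mbufs and report them as consumed so the
		 * application neither retries nor double-frees them
		 */
		for (i = 0; i < nb_bufs; i++)
			rte_pktmbuf_free(bufs[i]);

		return nb_bufs;
	}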

<...>


