[dpdk-dev] [PATCH v2 4/6] examples/ioat: port application to dmadev APIs

fengchengwen fengchengwen at huawei.com
Fri Sep 24 06:00:02 CEST 2021


On 2021/9/18 0:41, Kevin Laatz wrote:
> The dmadev library abstraction allows applications to use the same APIs for
> all DMA device drivers in DPDK. This patch updates the ioatfwd application
> to make use of the new dmadev APIs, in turn making it a generic application
> which can be used with any of the DMA device drivers.
> 
> Signed-off-by: Kevin Laatz <kevin.laatz at intel.com>
> 
> ---
> v2:
>   - dmadev api name updates following rebase
>   - use rte_config macro for max devs
>   - use PRIu64 for printing stats
> ---
>  examples/ioat/ioatfwd.c   | 239 ++++++++++++++++----------------------
>  examples/ioat/meson.build |   8 +-
>  2 files changed, 105 insertions(+), 142 deletions(-)
> 

[snip]

>  
>  typedef enum copy_mode_t {
> @@ -95,6 +94,16 @@ static copy_mode_t copy_mode = COPY_MODE_IOAT_NUM;
>   */
>  static unsigned short ring_size = 2048;
>  
> +/* global mbuf arrays for tracking DMA bufs */
> +#define MBUF_RING_SIZE	1024
> +#define MBUF_RING_MASK	(MBUF_RING_SIZE - 1)
> +struct dma_bufs {
> +	struct rte_mbuf *bufs[MBUF_RING_SIZE];
> +	struct rte_mbuf *copies[MBUF_RING_SIZE];
> +	uint16_t sent;
> +};

The dma_bufs ring only holds 1024 address entries, while the dmadev virtual channel ring size defaults to 2048.
If the DMA copies are not completed in time, entries in dma_bufs may be overwritten in the dma_dequeue() path.
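
One possible fix (a minimal sketch only, keeping the application's existing MBUF_RING_SIZE/MBUF_RING_MASK macros) is to size the tracking ring to at least the configured ring_size (2048 by default here) and keep it a power of two so the mask indexing still works:

	/* Tracking ring must be at least as large as the dmadev virtual
	 * channel ring, otherwise dma_dequeue() can wrap and overwrite
	 * entries that are still in flight. A power-of-two size keeps the
	 * MBUF_RING_MASK indexing valid. */
	#define MBUF_RING_SIZE	2048
	#define MBUF_RING_MASK	(MBUF_RING_SIZE - 1)

	_Static_assert((MBUF_RING_SIZE & MBUF_RING_MASK) == 0,
			"MBUF_RING_SIZE must be a power of two");

Alternatively the ring could be sized from ring_size at runtime, at the cost of dynamic allocation.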

> +static struct dma_bufs dma_bufs[RTE_DMADEV_DEFAULT_MAX_DEVS];
> +
>  /* global transmission config */
>  struct rxtx_transmission_config cfg;

[snip]

>  }
>  /* >8 End of configuration of device. */
>  
> @@ -820,18 +789,16 @@ assign_rawdevs(void)
>  
>  	for (i = 0; i < cfg.nb_ports; i++) {
>  		for (j = 0; j < cfg.ports[i].nb_queues; j++) {
> -			struct rte_rawdev_info rdev_info = { 0 };
> +			struct rte_dma_info dmadev_info = { 0 };
>  
>  			do {
> -				if (rdev_id == rte_rawdev_count())
> +				if (rdev_id == rte_dma_count_avail())
>  					goto end;
> -				rte_rawdev_info_get(rdev_id++, &rdev_info, 0);
> -			} while (rdev_info.driver_name == NULL ||
> -					strcmp(rdev_info.driver_name,
> -						IOAT_PMD_RAWDEV_NAME_STR) != 0);
> +				rte_dma_info_get(rdev_id++, &dmadev_info);
> +			} while (!rte_dma_is_valid(rdev_id));
>  
> -			cfg.ports[i].ioat_ids[j] = rdev_id - 1;
> -			configure_rawdev_queue(cfg.ports[i].ioat_ids[j]);
> +			cfg.ports[i].dmadev_ids[j] = rdev_id - 1;
> +			configure_rawdev_queue(cfg.ports[i].dmadev_ids[j]);

Tests show that if there are four dmadevs, only three of them can be assigned here:

1st assignment: rdev_id=0, assignment succeeds, dmadev_id=0, rdev_id becomes 1
2nd assignment: rdev_id=1, assignment succeeds, dmadev_id=1, rdev_id becomes 2
3rd assignment: rdev_id=2, assignment succeeds, dmadev_id=2, rdev_id becomes 3
4th assignment: rdev_id=3, assignment fails, because rte_dma_info_get(rdev_id++, ...) increments rdev_id to 4, which rte_dma_is_valid() rejects.

I recommend using rte_dma_next_dev(), which Bruce introduced.
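
For example (sketch only, assuming rte_dma_next_dev() from Bruce's series, which returns the next valid device id at or after the given id, or -1 when no more devices exist), the assignment loop could become:

	int16_t dev_id = rte_dma_next_dev(0);

	for (i = 0; i < cfg.nb_ports; i++) {
		for (j = 0; j < cfg.ports[i].nb_queues; j++) {
			if (dev_id == -1)
				goto end;

			cfg.ports[i].dmadev_ids[j] = dev_id;
			configure_rawdev_queue(cfg.ports[i].dmadev_ids[j]);
			/* advance to the next valid dmadev, -1 if exhausted */
			dev_id = rte_dma_next_dev(dev_id + 1);
			++nb_rawdev;
		}
	}
end:

This avoids skipping a device when the id following the last queried one happens to be invalid.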

>  			++nb_rawdev;
>  		}
>  	}
> @@ -840,7 +807,7 @@ assign_rawdevs(void)
>  		rte_exit(EXIT_FAILURE,
>  			"Not enough IOAT rawdevs (%u) for all queues (%u).\n",
>  			nb_rawdev, cfg.nb_ports * cfg.ports[0].nb_queues);
> -	RTE_LOG(INFO, IOAT, "Number of used rawdevs: %u.\n", nb_rawdev);
> +	RTE_LOG(INFO, DMA, "Number of used rawdevs: %u.\n", nb_rawdev);
>  }
>  /* >8 End of using IOAT rawdev API functions. */
>  

[snip]


