[dpdk-dev] [PATCH v25 1/6] dmadev: introduce DMA device library
Thomas Monjalon
thomas at monjalon.net
Tue Oct 12 21:09:38 CEST 2021
11/10/2021 09:33, Chengwen Feng:
> --- /dev/null
> +++ b/doc/guides/prog_guide/dmadev.rst
> @@ -0,0 +1,60 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2021 HiSilicon Limited
> +
> +DMA Device Library
> +==================
> +
> +The DMA library provides a DMA device framework for management and provisioning
> +of hardware and software DMA poll mode drivers, defining a generic API which
> +supports a number of different DMA operations.
> +
> +
> +Design Principles
> +-----------------
> +
> +The DMA framework provides a generic DMA device framework which supports both
> +physical (hardware) and virtual (software) DMA devices, as well as a generic DMA
> +API which allows DMA devices to be managed and configured, and supports DMA
> +operations to be provisioned on DMA poll mode driver.
> +
> +.. _figure_dmadev:
> +
> +.. figure:: img/dmadev.*
> +
> +The above figure shows the model on which the DMA framework is built:
> +
> + * The DMA controller could have multiple hardware DMA channels (aka. hardware
> + DMA queues), each hardware DMA channel should be represented by a dmadev.
> + * The dmadev could create multiple virtual DMA channels, each virtual DMA
> + channel represents a different transfer context. The DMA operation request
> + must be submitted to the virtual DMA channel. e.g. Application could create
> + virtual DMA channel 0 for memory-to-memory transfer scenario, and create
> + virtual DMA channel 1 for memory-to-device transfer scenario.
When updating the doc, we would like to change a minimum of lines,
so it is better to split lines logically: after a comma, after a period,
or before the next part of the sentence.
Do not hesitate to make short lines if needed.
Such a change is quick to make, thanks.
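As an illustration only (not part of the patch), the first sentence of the
description above could be wrapped at such logical boundaries:

    The DMA library provides a DMA device framework
    for management and provisioning
    of hardware and software DMA poll mode drivers,
    defining a generic API
    which supports a number of different DMA operations.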
[...]
> +* **Introduced dmadev library with:**
> +
> + * Device allocation functions.
You can drop this line; it is not a feature.
[...]
> +static int
> +dma_dev_data_prepare(void)
> +{
> + size_t size;
> +
> + if (rte_dma_devices != NULL)
> + return 0;
> +
> + size = dma_devices_max * sizeof(struct rte_dma_dev);
> + rte_dma_devices = malloc(size);
> + if (rte_dma_devices == NULL)
> + return -ENOMEM;
> + memset(rte_dma_devices, 0, size);
> +
> + return 0;
> +}
> +
> +static int
> +dma_data_prepare(void)
> +{
> + if (dma_devices_max == 0)
> + dma_devices_max = RTE_DMADEV_DEFAULT_MAX;
> + return dma_dev_data_prepare();
> +}
> +
> +static struct rte_dma_dev *
> +dma_allocate(const char *name, int numa_node, size_t private_data_size)
> +{
> + struct rte_dma_dev *dev;
> + void *dev_private;
> + int16_t dev_id;
> + int ret;
> +
> + ret = dma_data_prepare();
> + if (ret < 0) {
> + RTE_DMA_LOG(ERR, "Cannot initialize dmadevs data");
> + return NULL;
> + }
> +
> + dev = dma_find_by_name(name);
> + if (dev != NULL) {
> + RTE_DMA_LOG(ERR, "DMA device already allocated");
> + return NULL;
> + }
> +
> + dev_private = rte_zmalloc_socket(name, private_data_size,
> + RTE_CACHE_LINE_SIZE, numa_node);
> + if (dev_private == NULL) {
> + RTE_DMA_LOG(ERR, "Cannot allocate private data");
> + return NULL;
> + }
> +
> + dev_id = dma_find_free_id();
> + if (dev_id < 0) {
> + RTE_DMA_LOG(ERR, "Reached maximum number of DMA devices");
> + rte_free(dev_private);
> + return NULL;
> + }
> +
> + dev = &rte_dma_devices[dev_id];
> + rte_strscpy(dev->dev_name, name, sizeof(dev->dev_name));
> + dev->dev_id = dev_id;
> + dev->numa_node = numa_node;
> + dev->dev_private = dev_private;
> +
> + return dev;
> +}
> +
> +static void
> +dma_release(struct rte_dma_dev *dev)
> +{
> + rte_free(dev->dev_private);
> + memset(dev, 0, sizeof(struct rte_dma_dev));
> +}
> +
> +struct rte_dma_dev *
> +rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size)
> +{
> + struct rte_dma_dev *dev;
> +
> + if (dma_check_name(name) != 0 || private_data_size == 0)
> + return NULL;
> +
> + dev = dma_allocate(name, numa_node, private_data_size);
> + if (dev == NULL)
> + return NULL;
> +
> + dev->state = RTE_DMA_DEV_REGISTERED;
> +
> + return dev;
> +}
> +
> +int
> +rte_dma_pmd_release(const char *name)
> +{
> + struct rte_dma_dev *dev;
> +
> + if (dma_check_name(name) != 0)
> + return -EINVAL;
> +
> + dev = dma_find_by_name(name);
> + if (dev == NULL)
> + return -EINVAL;
> +
> + dma_release(dev);
> + return 0;
> +}
Trying to understand the logic of creation/destroy:
skeldma_probe
\-> skeldma_create
\-> rte_dma_pmd_allocate
\-> dma_allocate
\-> dma_data_prepare
\-> dma_dev_data_prepare
skeldma_remove
\-> skeldma_destroy
\-> rte_dma_pmd_release
\-> dma_release
app
\-> rte_dma_close
\-> skeldma_close
\-> dma_release
My only concern is that the PMD remove path does not call rte_dma_close.
The PMD should check which dmadev is open for the rte_device being removed,
and close that dmadev first.
This way, there would be no need for the function rte_dma_pmd_release,
and no need to duplicate the release process in two paths.
A sketch of this idea follows below.
By the way, the function vchan_release is called only in the close function,
not in the "remove path".
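To make the suggestion concrete, here is a minimal sketch (illustrative only,
not the patch under review) of a remove path that closes the dmadev first.
It assumes a name-to-id lookup such as rte_dma_get_dev_id() is available;
that helper name is an assumption, not a requirement of the proposal:

#include <errno.h>

#include <rte_bus_vdev.h>
#include <rte_dmadev.h>

/*
 * Illustrative sketch: let the vdev remove path close the dmadev itself,
 * so that skeldma_close() (reached via rte_dma_close) is the single
 * release path and rte_dma_pmd_release() becomes unnecessary.
 * rte_dma_get_dev_id() is assumed here to map a device name to a dev_id.
 */
static int
skeldma_remove(struct rte_vdev_device *vdev)
{
	const char *name = rte_vdev_device_name(vdev);
	int dev_id;

	if (name == NULL)
		return -EINVAL;

	dev_id = rte_dma_get_dev_id(name);
	if (dev_id < 0)
		return dev_id;

	/*
	 * rte_dma_close() calls skeldma_close(), which releases the vchan
	 * and the device private data in one place, covering both the
	 * application close path and the device remove path.
	 */
	return rte_dma_close(dev_id);
}

With such a remove path, vchan_release would also be reached on device
removal, since the close callback is always taken.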