[PATCH] dma/idxd: add support for multi-process when using VFIO
Bruce Richardson
bruce.richardson at intel.com
Mon May 15 18:29:07 CEST 2023
When using vfio-pci/uio for hardware access, we need to avoid
reinitializing the hardware when mapping the device from a secondary
process. Instead, just configure the function pointers and reuse the
data mappings set up by the primary process.
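In sketch form, the split looks like this (illustrative only, not the
driver code):

#include <rte_eal.h>

/* Hedged sketch: the probe-time split this patch adds. A secondary
 * process must not touch the hardware; it only sets up its own
 * per-process state against what the primary already created. */
static int
sketch_probe(void)
{
        if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
                /* secondary: attach to devices the primary created */
                return 0;
        }
        /* primary: reset and configure the hardware as before */
        return 0;
}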
Along with the code change, update the driver documentation to note
that vfio-pci can be used for multi-process support, and explicitly
state that multi-process support is unavailable when using the idxd
kernel driver.
Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
---
doc/guides/dmadevs/idxd.rst | 5 +++++
drivers/dma/idxd/idxd_common.c | 6 ++++--
drivers/dma/idxd/idxd_pci.c | 30 ++++++++++++++++++++++++++++++
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/dmadevs/idxd.rst b/doc/guides/dmadevs/idxd.rst
index bdfd3e78ad..f75d1d0a85 100644
--- a/doc/guides/dmadevs/idxd.rst
+++ b/doc/guides/dmadevs/idxd.rst
@@ -35,6 +35,11 @@ Device Setup
Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
such as ``vfio-pci``. Both are supported by the IDXD PMD.
+.. note::
+ To use Intel\ |reg| DSA devices in DPDK multi-process applications,
+ the devices should be bound to the vfio-pci driver.
+ Multi-process is not supported when using the kernel IDXD driver.
+
Intel\ |reg| DSA devices using IDXD kernel driver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 6fe8ad4884..83d53942eb 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -599,6 +599,10 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
dmadev->fp_obj->completed = idxd_completed;
dmadev->fp_obj->completed_status = idxd_completed_status;
dmadev->fp_obj->burst_capacity = idxd_burst_capacity;
+ dmadev->fp_obj->dev_private = dmadev->data->dev_private;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
idxd = dmadev->data->dev_private;
*idxd = *base_idxd; /* copy over the main fields already passed in */
@@ -619,8 +623,6 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
idxd->batch_idx_ring = (void *)&idxd->batch_comp_ring[idxd->max_batches+1];
idxd->batch_iova = rte_mem_virt2iova(idxd->batch_comp_ring);
- dmadev->fp_obj->dev_private = idxd;
-
idxd->dmadev->state = RTE_DMA_DEV_READY;
return 0;
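The reordering above means the fast-path object is populated for every
process type before the secondary-process early return. A minimal
sketch of the resulting shape, assuming the dmadev PMD structures from
rte_dmadev_pmd.h (names illustrative, not the driver code):

#include <rte_eal.h>
#include <rte_dmadev_pmd.h>

static int
sketch_dmadev_create(struct rte_dma_dev *dmadev)
{
        /* every process needs its own fast-path table, pointing at
         * the dev_private block that lives in shared memory */
        dmadev->fp_obj->dev_private = dmadev->data->dev_private;

        if (rte_eal_process_type() != RTE_PROC_PRIMARY)
                return 0; /* secondary: reuse the primary's state */

        /* primary only: initialize hardware and shared rings here */
        return 0;
}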
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 781fa02db3..5fe9314d01 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -309,6 +309,36 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
IDXD_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
dev->device.driver = &drv->driver;
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ char qname[32];
+ int max_qid;
+
+ /* look up queue 0 to get the pci structure */
+ snprintf(qname, sizeof(qname), "%s-q0", name);
+ IDXD_PMD_INFO("Looking up %s\n", qname);
+ ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+ if (ret != 0) {
+ IDXD_PMD_ERR("Failed to create dmadev %s", name);
+ return ret;
+ }
+ qid = rte_dma_get_dev_id_by_name(qname);
+ max_qid = rte_atomic16_read(
+ &((struct idxd_dmadev *)rte_dma_fp_objs[qid].dev_private)->u.pci->ref_count);
+
+ /* we have queue 0 done, now configure the rest of the queues */
+ for (qid = 1; qid < max_qid; qid++) {
+ /* add the queue number to each device name */
+ snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+ IDXD_PMD_INFO("Looking up %s\n", qname);
+ ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+ if (ret != 0) {
+ IDXD_PMD_ERR("Failed to create dmadev %s", name);
+ return ret;
+ }
+ }
+ return 0;
+ }
+
if (dev->device.devargs && dev->device.devargs->args[0] != '\0') {
/* if the number of devargs grows beyond just 1, use rte_kvargs */
if (sscanf(dev->device.devargs->args,
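As a hedged usage sketch: once the secondary probe above has attached
the per-queue devices, a secondary process can drive a queue by name.
The PCI address below is illustrative; the "-q0" suffix matches the
naming used in this patch:

#include <rte_dmadev.h>

static int
sketch_copy_from_secondary(rte_iova_t src, rte_iova_t dst, uint32_t len)
{
        /* "0000:6a:01.0" is a placeholder device name */
        int id = rte_dma_get_dev_id_by_name("0000:6a:01.0-q0");

        if (id < 0)
                return id;
        /* enqueue one copy and ring the doorbell immediately */
        return rte_dma_copy(id, 0, src, dst, len, RTE_DMA_OP_FLAG_SUBMIT);
}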
--
2.39.2