[dpdk-dev] [PATCH v2 2/3] vfio: fix DMA mapping granularity for type1 iova as va
Burakov, Anatoly
anatoly.burakov at intel.com
Tue Nov 10 15:17:39 CET 2020
On 05-Nov-20 9:04 AM, Nithin Dabilpuram wrote:
> Partial unmapping is not supported for VFIO IOMMU type1
> by the kernel. Though the kernel returns zero, the unmapped
> size reported will not match the requested size. So check the
> returned unmap size and return an error on mismatch.
>
> For IOVA as PA, DMA mapping is already done at memseg size
> granularity. Do the same for IOVA as VA mode: for DMA
> map/unmap triggered by heap allocations, maintain the
> granularity of the memseg page size so that heap expansion
> and contraction do not hit this issue.
>
> For user-requested DMA map/unmap, disallow partial unmapping
> for VFIO type1.
>
> Fixes: 73a639085938 ("vfio: allow to map other memory regions")
> Cc: anatoly.burakov at intel.com
> Cc: stable at dpdk.org
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram at marvell.com>
> ---
<snip>
> @@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
> /* for IOVA as VA mode, no need to care for IOVA addresses */
> if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
> uint64_t vfio_va = (uint64_t)(uintptr_t)addr;
> - if (type == RTE_MEM_EVENT_ALLOC)
> - vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> - len, 1);
> - else
> - vfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,
> - len, 0);
> + uint64_t page_sz = msl->page_sz;
> +
> + /* Maintain granularity of DMA map/unmap to memseg size */
> + for (; cur_len < len; cur_len += page_sz) {
> + if (type == RTE_MEM_EVENT_ALLOC)
> + vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> + vfio_va, page_sz, 1);
> + else
> + vfio_dma_mem_map(default_vfio_cfg, vfio_va,
> + vfio_va, page_sz, 0);
I think you're mapping the same address here, over and over. Perhaps you
meant `vfio_va + cur_len` for the mapping addresses?
--
Thanks,
Anatoly