[dpdk-dev] [PATCH 2/2] vfio: modify spapr iommu support to use static window sizing

Burakov, Anatoly anatoly.burakov at intel.com
Thu Apr 30 13:34:36 CEST 2020


On 30-Apr-20 12:29 AM, David Christensen wrote:
> Current SPAPR IOMMU support code dynamically modifies the DMA window
> size in response to every new memory allocation. This is potentially
> dangerous because all existing mappings need to be unmapped/remapped in
> order to resize the DMA window, leaving hardware holding IOVA addresses
> that are not properly prepared for DMA.  The new SPAPR code statically
> assigns the DMA window size on first use, using the largest physical
> memory address when IOVA=PA and the base_virtaddr + physical memory size
> when IOVA=VA.  As a result, memory will only be unmapped when
> specifically requested.
> 
> Signed-off-by: David Christensen <drc at linux.vnet.ibm.com>
> ---

Hi David,

I haven't yet looked at the code in detail (will do so later), but some 
general comments and questions below.

> +	/* only create DMA window once */
> +	if (spapr_dma_win_len > 0)
> +		return 0;
> +
> +	if (rte_eal_iova_mode() == RTE_IOVA_PA) {
> +		/* Set the DMA window to cover the max physical address */
> +		const char proc_iomem[] = "/proc/iomem";
> +		const char str_sysram[] = "System RAM";
> +		uint64_t start, end, max = 0;
> +		char *line = NULL;
> +		char *dash, *space;
> +		size_t line_len = 0;
> +
> +		/*
> +		 * Read "System RAM" in /proc/iomem:
> +		 * 00000000-1fffffffff : System RAM
> +		 * 200000000000-201fffffffff : System RAM
> +		 */
> +		FILE *fd = fopen(proc_iomem, "r");
> +		if (fd == NULL) {
> +			RTE_LOG(ERR, EAL, "Cannot open %s\n", proc_iomem);
> +			return -1;
> +		}
> +		/* Scan /proc/iomem for the highest PA in the system */
> +		while (getline(&line, &line_len, fd) != -1) {
> +			if (strstr(line, str_sysram) == NULL)
> +				continue;
> +
> +			space = strstr(line, " ");
> +			dash = strstr(line, "-");
> +
> +			/* Validate the format of the memory string */
> +			if (space == NULL || dash == NULL || space < dash) {
> +				RTE_LOG(ERR, EAL, "Can't parse line \"%s\" in file %s\n",
> +					line, proc_iomem);
> +				continue;
> +			}
> +
> +			start = strtoull(line, NULL, 16);
> +			end   = strtoull(dash + 1, NULL, 16);
> +			RTE_LOG(DEBUG, EAL, "Found system RAM from 0x%"
> +				PRIx64 " to 0x%" PRIx64 "\n", start, end);
> +			if (end > max)
> +				max = end;
> +		}
> +		free(line);
> +		fclose(fd);
> +
> +		if (max == 0) {
> +			RTE_LOG(ERR, EAL, "Failed to find valid \"System RAM\" entry "
> +				"in file %s\n", proc_iomem);
> +			return -1;
> +		}
> +
> +		spapr_dma_win_len = rte_align64pow2(max + 1);
> +		rte_mem_set_dma_mask(__builtin_ctzll(spapr_dma_win_len));

A quick check on my machines shows that when cat'ing /proc/iomem as 
non-root, you get zeroes for every address range, which leads me to 
believe that you have to be root to get anything useful out of 
/proc/iomem. Since one of the major selling points of VFIO is the 
ability to run as non-root, depending on /proc/iomem rather defeats the 
purpose. (In that case every parsed end address is zero, so max stays 0 
and the code above at least fails loudly rather than sizing the window 
from garbage.)

> +		return 0;
> +
> +	} else if (rte_eal_iova_mode() == RTE_IOVA_VA) {
> +		/* Set the DMA window to base_virtaddr + system memory size */
> +		const char proc_meminfo[] = "/proc/meminfo";
> +		const char str_memtotal[] = "MemTotal:";
> +		int memtotal_len = sizeof(str_memtotal) - 1;
> +		char buffer[256];
> +		uint64_t size = 0;
> +
> +		FILE *fd = fopen(proc_meminfo, "r");
> +		if (fd == NULL) {
> +			RTE_LOG(ERR, EAL, "Cannot open %s\n", proc_meminfo);
> +			return -1;
> +		}
> +		while (fgets(buffer, sizeof(buffer), fd)) {
> +			if (strncmp(buffer, str_memtotal, memtotal_len) == 0) {
> +				size = rte_str_to_size(&buffer[memtotal_len]);
> +				break;
> +			}
> +		}
> +		fclose(fd);
> +
> +		if (size == 0) {
> +			RTE_LOG(ERR, EAL, "Failed to find valid \"MemTotal\" entry "
> +				"in file %s\n", proc_meminfo);
> +			return -1;
> +		}
> +
> +		RTE_LOG(DEBUG, EAL, "MemTotal is 0x%" PRIx64 "\n", size);
> +		/* if no base virtual address is configured use 4GB */
> +		spapr_dma_win_len = rte_align64pow2(size +
> +			(internal_config.base_virtaddr > 0 ?
> +			(uint64_t)internal_config.base_virtaddr : 1ULL << 32));
> +		rte_mem_set_dma_mask(__builtin_ctzll(spapr_dma_win_len));

I'm not sure of the algorithm for "memory size" here.

Technically, DPDK can reserve memory segments anywhere in the VA space 
allocated by memseg lists. That space may be far bigger than system 
memory (on a typical Intel server board you'd see 128GB of VA space 
preallocated even though the machine itself might only have, say, 16GB 
of RAM installed). The same applies to any other arch running on Linux, 
so the window needs to cover at least the range from 
RTE_MIN(base_virtaddr, lowest memseg list VA address) up to the highest 
memseg list VA address - see the sketch below. That's not even 
mentioning the fact that the user may register external memory for DMA, 
which may leave a window sized this way too small to cover said 
external memory.
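
Very roughly, something like the following could derive those bounds by 
walking the memseg lists instead of reading /proc/meminfo. This is only 
a sketch: rte_memseg_list_walk() and rte_align64pow2() are the real 
APIs, but the helper names and where they get wired in are made up for 
illustration.

#include <stdint.h>
#include <rte_common.h>
#include <rte_memory.h>

struct va_bounds {
	uint64_t lo;
	uint64_t hi;
};

/* callback: track the lowest and highest VA covered by any memseg list */
static int
memseg_list_bounds(const struct rte_memseg_list *msl, void *arg)
{
	struct va_bounds *b = arg;
	uint64_t start = (uint64_t)(uintptr_t)msl->base_va;

	b->lo = RTE_MIN(b->lo, start);
	b->hi = RTE_MAX(b->hi, start + msl->len);
	return 0;
}

static uint64_t
spapr_window_size(void)
{
	struct va_bounds b = { .lo = UINT64_MAX, .hi = 0 };

	rte_memseg_list_walk(memseg_list_bounds, &b);
	/* b.lo gives the RTE_MIN(...) lower bound discussed above; the
	 * spapr window is based at IOVA 0, so its size must reach b.hi */
	return rte_align64pow2(b.hi);
}

Even then, memory registered later via rte_extmem_register() could 
still fall outside a window sized at creation time, so that case needs 
handling on top.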

I also think that, in general, the "system memory" metric is ill-suited 
for sizing VA space, because unlike physical memory, the VA space is 
sparse: it can span *a lot* of address range while actually being 
backed by very little physical memory.
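
A minimal standalone illustration of that point (Linux, 64-bit build 
assumed):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* Reserve 128GB of VA space with no backing pages. On Linux this
	 * succeeds even on a machine with far less RAM, because nothing
	 * is committed until the pages are made accessible and touched. */
	size_t len = 128ULL << 30;
	void *va = mmap(NULL, len, PROT_NONE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	printf("reserved %zu GiB at %p\n", len >> 30, va);
	return va == MAP_FAILED;
}

This is essentially what the memseg list preallocation does, which is 
why MemTotal tells you very little about how much VA space the window 
has to cover.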

-- 
Thanks,
Anatoly

