[PATCH 2/3] mem: fix ASan shadow for remapped memory segments

Burakov, Anatoly anatoly.burakov at intel.com
Wed Apr 20 16:47:19 CEST 2022


On 15-Apr-22 6:31 PM, David Marchand wrote:
> When releasing some memory, the allocator can choose to return some
> pages to the OS. At the same time, this memory was poisoned in the ASan
> shadow. Doing the latter made it impossible to remap this same page
> later.
> On the other hand, without this poison, the OS would fault on any
> access to this page anyway.
> 
> Remove the poisoning for unmapped pages.
> 
> Bugzilla ID: 994
> Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
> Cc: stable at dpdk.org
> 
> Signed-off-by: David Marchand <david.marchand at redhat.com>
> ---
>   lib/eal/common/malloc_elem.h |  4 ++++
>   lib/eal/common/malloc_heap.c | 12 +++++++++++-
>   2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/eal/common/malloc_elem.h b/lib/eal/common/malloc_elem.h
> index 228f178418..b859003722 100644
> --- a/lib/eal/common/malloc_elem.h
> +++ b/lib/eal/common/malloc_elem.h
> @@ -272,6 +272,10 @@ old_malloc_size(struct malloc_elem *elem)
>   
>   #else /* !RTE_MALLOC_ASAN */
>   
> +static inline void
> +asan_set_zone(void *ptr __rte_unused, size_t len __rte_unused,
> +	uint32_t val __rte_unused) { }
> +
>   static inline void
>   asan_set_freezone(void *ptr __rte_unused, size_t size __rte_unused) { }
>   
> diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
> index 6c572b6f2c..5913d9f862 100644
> --- a/lib/eal/common/malloc_heap.c
> +++ b/lib/eal/common/malloc_heap.c
> @@ -860,6 +860,7 @@ malloc_heap_free(struct malloc_elem *elem)
>   	size_t len, aligned_len, page_sz;
>   	struct rte_memseg_list *msl;
>   	unsigned int i, n_segs, before_space, after_space;
> +	bool unmapped_pages = false;
>   	int ret;
>   	const struct internal_config *internal_conf =
>   		eal_get_internal_configuration();
> @@ -999,6 +1000,13 @@ malloc_heap_free(struct malloc_elem *elem)
>   
>   		/* don't care if any of this fails */
>   		malloc_heap_free_pages(aligned_start, aligned_len);
> +		/*
> +		 * Clear any poisoning in ASan for the associated pages so that
> +		 * next time EAL maps those pages, the allocator can access
> +		 * them.
> +		 */
> +		asan_set_zone(aligned_start, aligned_len, 0x00);
> +		unmapped_pages = true;
>   
>   		request_sync();
>   	} else {
> @@ -1032,7 +1040,9 @@ malloc_heap_free(struct malloc_elem *elem)
>   
>   	rte_mcfg_mem_write_unlock();
>   free_unlock:
> -	asan_set_freezone(asan_ptr, asan_data_len);
> +	/* Poison memory range if belonging to some still mapped pages. */
> +	if (!unmapped_pages)
> +		asan_set_freezone(asan_ptr, asan_data_len);
>   
>   	rte_spinlock_unlock(&(heap->lock));
>   	return ret;

I suspect the patch should be a little more complicated than that. When
we unmap pages, we don't necessarily unmap the entire malloc element; it
could be that we end up with a freed allocation laid out like so:

| malloc header | free space | unmapped space | free space | next malloc header |

So, I think the freezone should be set from asan_ptr to aligned_start,
and then from (aligned_start + aligned_len) to (asan_ptr +
asan_data_len). Does that make sense? A rough sketch of what I have in
mind is below.
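
Something along these lines (untested and purely illustrative; the names
asan_ptr, asan_data_len, aligned_start, aligned_len and the unmapped flag
are the variables from the patch above, while the helper itself is
hypothetical and would sit next to the existing code in malloc_heap.c):

/*
 * Illustrative only: poison the still-mapped parts of the freed
 * element while leaving the unmapped hole untouched, so that the
 * pages returned to the OS keep a clean shadow and can be remapped.
 */
static void
asan_poison_around_hole(void *asan_ptr, size_t asan_data_len,
		void *aligned_start, size_t aligned_len, bool unmapped)
{
	uintptr_t zone_start = (uintptr_t)asan_ptr;
	uintptr_t zone_end = zone_start + asan_data_len;
	uintptr_t hole_start = (uintptr_t)aligned_start;
	uintptr_t hole_end = hole_start + aligned_len;

	if (!unmapped) {
		/* no pages were returned to the OS, poison the whole zone */
		asan_set_freezone(asan_ptr, asan_data_len);
		return;
	}

	/* head: [asan_ptr, aligned_start) is still mapped */
	if (hole_start > zone_start)
		asan_set_freezone((void *)zone_start, hole_start - zone_start);

	/* tail: [aligned_start + aligned_len, asan_ptr + asan_data_len) */
	if (zone_end > hole_end)
		asan_set_freezone((void *)hole_end, zone_end - hole_end);
}

That way the hole that went back to the OS keeps a clean shadow, while
the free space around it that is still mapped stays poisoned.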

-- 
Thanks,
Anatoly
