[dpdk-stable] patch 'mem: fix memory initialization time' has been queued to LTS release 17.11.5

Yongseok Koh yskoh at mellanox.com
Fri Nov 30 00:09:57 CET 2018


Hi,

FYI, your patch has been queued to LTS release 17.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/01/18. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the patch applied
to the branch. If the code differs (i.e. not only metadata diffs), for example due to
a change in context or macro names, please double-check it.

Thanks.

Yongseok

---
From 2aadc16b2f8323490d4017e6e77c80d56c45ca41 Mon Sep 17 00:00:00 2001
From: Alejandro Lucero <alejandro.lucero at netronome.com>
Date: Mon, 12 Nov 2018 11:18:19 +0000
Subject: [PATCH] mem: fix memory initialization time

When using a large amount of hugepage-based memory, mapping all the
hugepages can take a significant amount of time.

The problem is that hugepages are initially mmapped to virtual addresses
which will be tried again later for the final hugepage mmapping. This
forces the final mapping to call mmap with another hint address, which
can happen several times depending on the amount of memory to mmap, with
each mmap taking more than a second.

This patch changes the hint for the initial hugepage mmapping to use a
starting address which will not collide with the final mmapping.

Fixes: 293c0c4b957f ("mem: use address hint for mapping hugepages")

Signed-off-by: Alejandro Lucero <alejandro.lucero at netronome.com>
Acked-by: Anatoly Burakov <anatoly.burakov at intel.com>
Acked-by: Eelco Chaudron <echaudro at redhat.com>
Tested-by: Eelco Chaudron <echaudro at redhat.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index bac969a12..0675809b7 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -421,6 +421,21 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
 	}
 #endif
 
+#ifdef RTE_ARCH_64
+	/*
+	 * Hugepages are first mmapped individually and then re-mmapped into
+	 * another region so that physically contiguous pages also get
+	 * contiguous virtual addresses. Set vma_addr here so the first
+	 * hugepage is mapped to a virtual address which will not collide
+	 * with the second mmapping later. The next hugepages will use
+	 * increments of this initial address.
+	 *
+	 * The final virtual address will be based on baseaddr, which is
+	 * 0x100000000. We use a hint here starting at 0x200000000, leaving
+	 * another 4GB just in case, plus the total available hugepage memory.
+	 */
+	vma_addr = (char *)0x200000000 + (hpi->hugepage_sz * hpi->num_pages[0]);
+#endif
 	for (i = 0; i < hpi->num_pages[0]; i++) {
 		uint64_t hugepage_sz = hpi->hugepage_sz;
 
-- 
2.11.0
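
For reference, below is a minimal standalone C sketch of the hinting scheme the
commit message describes: the first-pass hint is placed at 0x200000000 plus the
total hugepage memory, so it cannot overlap the final region based at
0x100000000 and the second mmap does not have to retry with new hint addresses.
This is not part of the patch; the 2 MB page size, the page count and the
anonymous backing are assumptions for the demo, whereas the real EAL code maps
hugetlbfs-backed files.

#define _DEFAULT_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* Assumed sizes for the demo: 1024 hugepages of 2 MB (2 GB total). */
	uint64_t hugepage_sz = 2UL << 20;
	uint64_t num_pages = 1024;

	/*
	 * First-pass hint: start past 0x200000000 plus the total hugepage
	 * memory, so it stays clear of the final region based at 0x100000000.
	 */
	void *hint = (char *)0x200000000UL + hugepage_sz * num_pages;

	/*
	 * MAP_ANONYMOUS stands in for the real hugetlbfs file mapping.
	 * Without MAP_FIXED the address is only a hint; the kernel returns
	 * a different address if the requested range is unavailable.
	 */
	void *va = mmap(hint, hugepage_sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (va == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("hint %p -> mapped at %p (%s)\n", hint, va,
	       va == hint ? "hint honoured" : "collision, kernel moved it");

	munmap(va, hugepage_sz);
	return 0;
}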
