hugepage mapping to memseg failure
Lombardo, Ed
Ed.Lombardo at netscout.com
Wed Sep 11 06:26:39 CEST 2024
Hi Dmitry,
Legacy memory mode was one way to reduce the VIRT memory drastically. Our application restricts and locks down memory for performance purposes. We need to continue to offer our customers our virtual application with a minimum of 16 GB of memory. With DPDK 22.11 the VIRT memory jumped to 66 GB.
The VIRT memory jump caused problems with our application startup. You had helped me reduce VIRT memory to ~8GB and this came close to what DPDK 17.11 provided us.
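For reference, a minimal sketch of EAL options along these lines (the core list and sizes below are placeholders, not the values used in this thread):

```shell
# Hypothetical invocation: legacy memory mode pre-allocates all hugepage
# memory at init time instead of reserving a large virtual address space
# for dynamic allocation, which keeps VIRT close to what is actually used.
./app -l 0-3 \
    --legacy-mem \
    --socket-mem=2048 \
    -- <application args>
```

With `--legacy-mem`, the `--socket-mem` amount is allocated up front and the memory map stays fixed for the lifetime of the process, similar to DPDK 17.11 behavior.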
Thanks,
Ed
-----Original Message-----
From: Dmitry Kozlyuk <dmitry.kozliuk at gmail.com>
Sent: Tuesday, September 10, 2024 6:37 PM
To: Lombardo, Ed <Ed.Lombardo at netscout.com>
Cc: users <users at dpdk.org>
Subject: Re: hugepage mapping to memseg failure
2024-09-10 20:42 (UTC+0000), Lombardo, Ed:
> Hi Dmitry,
> If I use grub for hugepages will the hugepages always be contiguous and we won’t see the mapping to memsegs issue?
There are no guarantees about physical addresses.
On bare metal, getting contiguous addresses at system startup is more likely.
On a VM, I think it is always less likely, because host memory is fragmented.
> I am investigating your option 2 you provided to see how much VIRT memory increases.
There might be a third option.
If your HW and hypervisor permit accessing IOMMU from guests and if the NIC can be bound to vfio-pci driver, then you could use IOVA-as-VA (--iova-mode=va) and have no issues with physical addresses ever.
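A sketch of what that third option could look like (the PCI address is a placeholder for the actual device):

```shell
# Bind the NIC to the vfio-pci driver (requires IOMMU / vIOMMU support):
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0

# Start the application with IOVA-as-VA, so DPDK programs the IOMMU with
# virtual addresses and physical-address contiguity no longer matters:
./app -l 0-3 --iova-mode=va -- <application args>
```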
Out of curiosity, why is legacy memory mode preferable for your app?