DPDK hugepages
Lombardo, Ed
Ed.Lombardo at netscout.com
Thu May 25 07:36:02 CEST 2023
Hi,
I have two DPDK processes in our application, where one process allocates 1024 2MB hugepages and the second process allocates 8 1GB hugepages.
I allocate the hugepages in a script before the application starts. This accommodates different configuration settings, and I don't want to modify the GRUB command line whenever the second DPDK process is enabled or disabled.
Script that preconditions the hugepages:
Process 1:
mkdir /mnt/huge
mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
Process 2:
mkdir /dev/hugepages-1024
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1024
echo 8 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
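Note that the echo writes above are best-effort: the kernel may reserve fewer pages than requested (1 GB pages especially, since they require contiguous memory), so I read the counters back after the script runs. A minimal sketch of that check, assuming the sysfs paths above:

```shell
#!/bin/sh
# Read back what the kernel actually reserved; a write to nr_hugepages
# can silently allocate fewer pages than requested.
for sz in hugepages-2048kB hugepages-1048576kB; do
    f=/sys/devices/system/node/node0/hugepages/$sz/nr_hugepages
    if [ -r "$f" ]; then
        echo "$sz: $(cat "$f") pages reserved"
    else
        echo "$sz: not available on this system"
    fi
done
# Summary view of hugepage state.
grep -i hugepages /proc/meminfo
```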
Application -
Process 1 DPDK EAL arguments:
const char *argv[] = { "app1", "-c", "7fc", "-n", "4", "--huge-dir", "/dev/hugepages-1024", "--proc-type", "secondary" };
Process 2 DPDK EAL arguments:
const char *dpdk_argv_2gb[] = { "app1", "-c", "0x2", "-n", "4", "--socket-mem=2048", "--huge-dir", "/mnt/huge", "--proc-type", "primary" };
Questions:
1. Does DPDK support two hugepage sizes (2MB and 1GB) sharing app1?
2. Do I need to specify --proc-type for each process in the arguments passed to rte_eal_init(), as shown above?
3. I find that the /dev/hugepages/rtemap_# files are no longer present once Process 2's hugepages-1G nr_hugepages is set to 8, but when I set the value to 1, the 1024 /dev/hugepages/rtemap_# files are present. I can't see how to resolve this issue. Any suggestions?
4. Do I need to set --socket-mem to the total memory of both processes, or is it defined separately per process? I have one NUMA node in this VM.
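In case it helps with question 3, this is how I inspect the hugetlbfs mounts and DPDK's backing files (a read-only sketch, assuming the mount points above):

```shell
#!/bin/sh
# List every hugetlbfs mount and its page size, then any rtemap_*
# backing files DPDK has created under the default mount point.
grep hugetlbfs /proc/mounts || echo "no hugetlbfs mounts found"
ls -l /dev/hugepages/rtemap_* 2>/dev/null || echo "no rtemap files under /dev/hugepages"
```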
Thanks,
Ed