hugepages on both sockets
Lombardo, Ed
Ed.Lombardo at netscout.com
Sun Apr 6 02:56:45 CEST 2025
Hi Dmitry,
You raised a good point. I was passing the string literal "--socket-mem=2048,2048" in the argv array given to rte_eal_init(), and DPDK's EAL tokenizes that string in place, so it crashed trying to modify read-only memory. I don't know why DPDK does this, but now I know, and I now see four rtemap_x files created for the two sockets.
Thank you,
Ed
-----Original Message-----
From: Dmitry Kozlyuk <dmitry.kozliuk at gmail.com>
Sent: Friday, April 4, 2025 6:40 PM
To: Lombardo, Ed <Ed.Lombardo at netscout.com>; users at dpdk.org
Subject: Re: hugepages on both sockets
Hi Ed,
On 05.04.2025 01:24, Lombardo, Ed wrote:
>
> Hi,
>
> I tried to pass into rte_eal_init() the argument
> "--socket-mem=2048,2048", and I get a segmentation fault when the
> rte_strsplit() function is called:
>
> arg_num = rte_strsplit(strval, len, arg, RTE_MAX_NUMA_NODES, ',');
>
Please forgive me for the stupid question:
"strval" points to a mutable buffer, like 'char strval[] = "2048,2048"', not 'char *strval = "2048,2048"'?
> If I pass "--socket-mem=2048", rte_eal_init() does not
> complain.
>
> I'm not sure this ensures both CPU sockets will host two 1G
> hugepages each? I suspect it doesn't, because I only see
> rtemap_0 and rtemap_1 in the /mnt/huge directory. I think I
> should see four files total.
>
> # /opt/dpdk/dpdk-hugepages.py -s
>
> Node  Pages  Size  Total
> 0     2      1Gb   2Gb
> 1     2      1Gb   2Gb
>
> I don't know if I should believe the above output showing 2Gb on
> NUMA nodes 0 and 1.
>
You are correct: --socket-mem=2048 allocates 2048 MB total, spread across nodes.
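To summarize the two forms discussed in this thread (a sketch of the EAL flag values, per the behavior described above, not an exhaustive reference):

```shell
# Per-node request: 2048 MB on NUMA node 0 AND 2048 MB on node 1
--socket-mem=2048,2048

# Single value: 2048 MB total, which EAL may spread across nodes
--socket-mem=2048
```

With the per-node form and 1G hugepages, four rtemap_x files (two per node) would be expected under the hugepage mount.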