[dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution

humin (Q) <humin29@huawei.com>
Mon Feb 24 07:35:20 CET 2020


We found that when the OS transparent hugepage setting is anything other than 'always', the mlockall() call in testpmd's main() takes more than 25 seconds to execute.

The results are the same on both x86 and ARM. This is unreasonable and severely hurts startup time. The transparent hugepage setting of the OS can be checked with the following command:

[root@X6000-C23-1U dpdk]# cat /sys/kernel/mm/transparent_hugepage/enabled

always [madvise] never
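
The call being measured is the mlockall() that testpmd issues at startup. A minimal standalone equivalent (a sketch with simplified error handling, not the exact testpmd code) looks like this:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* Lock all current and future mappings into RAM, as testpmd does
	 * after EAL initialization. When transparent hugepage is set to
	 * 'madvise' or 'never', this single call is where the ~25 s stall
	 * is observed.
	 */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
		fprintf(stderr, "mlockall() failed: %s\n", strerror(errno));

	return 0;
}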



With transparent hugepage on ARM configured as 'madvise', 'never', or 'always', we ran testpmd under strace as follows:

*******************************  Transparent hugepage is configured as 'madvise'  *******************************

[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
EAL: Detected 96 lcore(s)
EAL: Detected 4 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:7d:01.0 on NUMA socket 0
EAL:   probe driver: 19e5:a22f net_hns3_vf
EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
EAL: Requested device 0000:7d:01.0 cannot be used
testpmd: No probed ethernet devices
Interactive-mode selected
mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.736362>
<---------------------- Hang for 25 seconds
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd>
testpmd> quit

Bye...
+++ exited with 0 +++



*****************************  Transparent hugepage is configured as 'never'  *********************************

[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
EAL: Detected 96 lcore(s)
EAL: Detected 4 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:7d:01.0 on NUMA socket 0
EAL:   probe driver: 19e5:a22f net_hns3_vf
EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
EAL: Requested device 0000:7d:01.0 cannot be used
testpmd: No probed ethernet devices
Interactive-mode selected
mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.737757>
<---------------------- Hang for 25 seconds
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd> quit

Bye...
+++ exited with 0 +++



*****************************  Transparent hugepage is configured as 'always'  *********************************

[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
strace: Can't stat 'testpmd': No such file or directory
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
EAL: Detected 96 lcore(s)
EAL: Detected 4 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:7d:01.0 on NUMA socket 0
EAL:   probe driver: 19e5:a22f net_hns3_vf
EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
EAL: Requested device 0000:7d:01.0 cannot be used
testpmd: No probed ethernet devices
Interactive-mode selected
mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.208571>
<---------------------- No Hang
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd> quit

Bye...
+++ exited with 0 +++

**********************************************************************************************************************



We have also seen some discussion of this issue on the following page:

https://bugzilla.redhat.com/show_bug.cgi?id=1786923



David Marchand has a related patch at the following page:

https://github.com/david-marchand/dpdk/commit/f9e1b9fa101c9f4f16c0717401a55790aecc6484

but this patch does not seem to have been submitted to the community yet.
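
Our understanding, from that commit and from the strace output below, is that the change essentially adds MCL_ONFAULT to the mlockall() flags, so pages are locked lazily as they are first touched instead of being pre-faulted up front. A minimal sketch of the changed call (assuming MCL_ONFAULT is available; the fallback value 4 below is the Linux value on x86 and arm64):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MCL_ONFAULT
#define MCL_ONFAULT 4	/* Linux value on x86 and arm64; older libc headers may not define it */
#endif

int main(void)
{
	/* MCL_ONFAULT locks pages when they are first faulted in rather
	 * than pre-faulting everything at mlockall() time, which avoids
	 * the long startup stall.
	 */
	if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) != 0)
		fprintf(stderr, "mlockall() failed: %s\n", strerror(errno));

	return 0;
}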

With this patch applied and transparent hugepage on ARM configured as 'madvise' or 'never', testpmd runs under strace as follows:

*******************************************************

[root@X6000-C23-1U app]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
EAL: Detected 96 lcore(s)
EAL: Detected 4 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:7d:01.0 on NUMA socket 0
EAL:   probe driver: 19e5:a22f net_hns3_vf
EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing
EAL: Requested device 0000:7d:01.0 cannot be used
testpmd: No probed ethernet devices
Interactive-mode selected
mlockall(MCL_CURRENT|MCL_FUTURE|MCL_ONFAULT) = 0 <1.955947>
<---------------------- Hang for less than 2 seconds
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd> quit

Bye...
+++ exited with 0 +++



We would like to know the current status of work on this issue in the DPDK community. Thanks.



Best Regards




