[dpdk-users] Failed to allocate tx pool
Cliff Burdick
shaklee3 at gmail.com
Wed Oct 31 02:56:45 CET 2018
Have you tried allocating memory on both NUMA nodes to rule that out?
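
If hugepages get reserved on node 0 as well (the per-node nr_hugepages files you list further down are writable), you can then ask the EAL for memory on both sockets explicitly and see whether the failure is per-node. A minimal sketch, with illustrative sizes and core choices, not taken from your application:

/* Sketch: request hugepage memory on both NUMA nodes at EAL init so a
 * per-node shortage shows up immediately. Sizes and the core mask are
 * illustrative. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>

int main(int argc, char **argv)
{
        (void)argc;
        char *eal_args[] = {
                argv[0],
                "-c", "0x3c00",            /* CPUs 10-13: socket 1 per cpu_layout.py */
                "--socket-mem", "128,128", /* MB to reserve on node 0, node 1 */
        };
        int eal_argc = sizeof(eal_args) / sizeof(eal_args[0]);

        if (rte_eal_init(eal_argc, eal_args) < 0)
                rte_exit(EXIT_FAILURE, "rte_eal_init failed\n");

        /* ... port and mempool setup would follow here ... */
        return 0;
}
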
On Tue, Oct 30, 2018, 16:40 Raghu Gangi <raghu_gangi at adaranetworks.com>
wrote:
> Hi,
>
> I am currently facing an issue in bringing up a DPDK application. It is
> failing with the following message; rte_errno is set to 12 in this
> scenario (out of memory).
>
> It would be great if you could kindly point me to what I am doing
> incorrectly.
>
> I am using DPDK 2.2.0 on Ubuntu 16.04.
>
> EAL: PCI device 0000:02:00.0 on NUMA socket 0
> EAL: probe driver: 8086:1521 rte_igb_pmd
> EAL: Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:02:00.3 on NUMA socket 0
> EAL: probe driver: 8086:1521 rte_igb_pmd
> EAL: Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:82:00.0 on NUMA socket 1
> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI memory mapped at 0x7fd1a7600000
> EAL: PCI memory mapped at 0x7fd1a7640000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:82:00.1 on NUMA socket 1
> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI memory mapped at 0x7fd1a7644000
> EAL: PCI memory mapped at 0x7fd1a7684000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 4
> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
> RING: Cannot reserve memory
> dpdk_if_init:256: failed to allocate tx pool
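
The "RING: Cannot reserve memory" line is the ring behind the mempool failing to get hugepage memory, so rte_errno 12 at this point almost always means the socket the pool was requested on has no (or not enough contiguous) hugepage memory. I don't know what your dpdk_if_init() does internally, but a minimal TX pool creation pinned to the port's NUMA node would look roughly like the sketch below (pool name and sizes are illustrative). Note that if the pool were instead created with rte_socket_id() while the master lcore sits on socket 0, it would try node 0, which has no hugepages in your setup.

/* Sketch: create the TX mbuf pool on the same NUMA node as the port.
 * Name and sizes are illustrative. */
#include <stdio.h>
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static struct rte_mempool *
create_tx_pool(uint8_t port_id)
{
        int socket_id = rte_eth_dev_socket_id(port_id);

        struct rte_mempool *pool = rte_pktmbuf_pool_create(
                "tx_pool",                 /* illustrative name */
                8192,                      /* number of mbufs */
                256,                       /* per-lcore cache size */
                0,                         /* private area size */
                RTE_MBUF_DEFAULT_BUF_SIZE, /* data room per mbuf */
                socket_id);

        if (pool == NULL)
                printf("tx pool alloc failed: %s\n", rte_strerror(rte_errno));
        return pool;
}
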
>
> The DPDK-bound NICs are on NUMA socket 1.
>
> root at rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# ./tools/dpdk_nic_bind.py
> --status
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:82:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
> unused=ixgbe
> 0000:82:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
> unused=ixgbe
>
> Network devices using kernel driver
> ===================================
> 0000:02:00.0 'I350 Gigabit Network Connection' if=eno1 drv=igb
> unused=igb_uio *Active*
> 0000:02:00.3 'I350 Gigabit Network Connection' if=eno2 drv=igb
> unused=igb_uio
>
> Other network devices
> =====================
> <none>
>
>
> root at rg2-14053:/home/adara/raghu_2/run# cat
> /sys/bus/pci/devices/0000\:82\:00.0/numa_node
> 1
> root at rg2-14053:/home/adara/raghu_2/run# cat
> /sys/bus/pci/devices/0000\:82\:00.1/numa_node
> 1
>
>
> DPDK hugepages are allocated on the same NUMA node, node 1, as shown below:
>
> root at rg2-14053:/home/adara/raghu_2/run# cat
> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 0
> root at rg2-14053:/home/adara/raghu_2/run# cat
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> 128
> root at rg2-14053:/home/adara/raghu_2/run#
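
So all of the reserved memory is on node 1: 128 pages x 2 MB = 256 MB, and node 0 has none. A typical mbuf pool (say 8192 mbufs at roughly 2 KB each, i.e. on the order of 16-20 MB plus ring overhead) fits easily in 256 MB, so the total amount is probably not the problem; what matters is which socket the allocation is directed at. If you just want to rule the socket out, SOCKET_ID_ANY lets the allocator fall back to whichever node still has free pages, e.g. (same headers as the sketch above, sizes still illustrative):

/* Sketch: let the allocator pick any NUMA node with free hugepage memory. */
struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "tx_pool_any", 8192, 256, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
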
>
> Output of the CPU layout tool:
>
> root at rg2-14053:/home/adara/raghu_2/run# ../dpdk-2.2.0/tools/cpu_layout.py
> ============================================================
> Core and Socket Information (as reported by '/proc/cpuinfo')
> ============================================================
> cores = [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]
> sockets = [0, 1]
>
>          Socket 0        Socket 1
>          --------        --------
> Core 0   [0, 20]         [10, 30]
> Core 1   [1, 21]         [11, 31]
> Core 2   [2, 22]         [12, 32]
> Core 3   [3, 23]         [13, 33]
> Core 4   [4, 24]         [14, 34]
> Core 8   [5, 25]         [15, 35]
> Core 9   [6, 26]         [16, 36]
> Core 10  [7, 27]         [17, 37]
> Core 11  [8, 28]         [18, 38]
> Core 12  [9, 29]         [19, 39]
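
Per this layout, physical CPUs 10-19 (and their hyperthread siblings 30-39) are on socket 1, where both the hugepages and the NICs live. If the core mask you pass to the EAL only contains socket-0 CPUs, anything allocated with the calling lcore's socket (rte_socket_id()) will look for memory on node 0. A quick check you can drop in right after rte_eal_init() (sketch):

/* Sketch: print which NUMA node the master lcore ended up on. If this
 * prints socket 0, allocations made with rte_socket_id() will target
 * node 0, which has no hugepages reserved in this setup. */
#include <stdio.h>
#include <rte_lcore.h>

static void print_master_socket(void)
{
        printf("master lcore %u is on socket %u\n",
               rte_lcore_id(), rte_socket_id());
}
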
>
> Thanks,
> Raghu
>