[dpdk-dev] DPDK startup issue with descending virtual address allocation on a new kernel

Michael Hu (NSBU) humichael at vmware.com
Thu Sep 11 00:40:36 CEST 2014


Hi All,

We have a kernel configuration question to consult you about.
DPDK fails to start due to an mbuf creation issue on a new kernel, 3.14.17 + grsecurity patches.
We tried to trace down the issue, and it appears that the kernel now allocates the hugepage virtual addresses from high to low, whereas DPDK expects them to go from low to high in order to treat them as consecutive. See the dumped virtual addresses below: the first mapping is at 0x710421400000 and the next at 0x710421200000, where previously the order would have been 0x710421200000 first and then 0x710421400000. The pages are still contiguous, just mapped in descending order.
----
Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 00:0c:29:b3:30:db
    Create: Default RX  0:0  - Memory used (MBUFs 4096 x (size 1984 + Hdr 64)) + 790720 =   8965 KB
Zone 0: name:<RG_MP_log_history>, phys:0x6ac00000, len:0x2080, virt:0x710421400000, socket_id:0, flags:0
Zone 1: name:<MP_log_history>, phys:0x6ac02080, len:0x1d10c0, virt:0x710421402080, socket_id:0, flags:0
Zone 2: name:<MALLOC_S0_HEAP_0>, phys:0x6ae00000, len:0x160000, virt:0x710421200000, socket_id:0, flags:0
Zone 3: name:<rte_eth_dev_data>, phys:0x6add3140, len:0x11a00, virt:0x7104215d3140, socket_id:0, flags:0
Zone 4: name:<rte_vmxnet3_pmd_0_shared>, phys:0x6ade4b40, len:0x300, virt:0x7104215e4b40, socket_id:0, flags:0
Zone 5: name:<rte_vmxnet3_pmd_0_queuedesc>, phys:0x6ade4e80, len:0x200, virt:0x7104215e4e80, socket_id:0, flags:0
Zone 6: name:<RG_MP_Default RX  0:0>, phys:0x6ade5080, len:0x10080, virt:0x7104215e5080, socket_id:0, flags:0
Segment 0: phys:0x6ac00000, len:2097152, virt:0x710421400000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x6ae00000, len:2097152, virt:0x710421200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x6b000000, len:2097152, virt:0x710421000000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6b200000, len:2097152, virt:0x710420e00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x6b400000, len:2097152, virt:0x710420c00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x6b600000, len:2097152, virt:0x710420a00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 6: phys:0x6b800000, len:2097152, virt:0x710420800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 7: phys:0x6ba00000, len:2097152, virt:0x710420600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 8: phys:0x6bc00000, len:2097152, virt:0x710420400000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 9: phys:0x6be00000, len:2097152, virt:0x710420200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
---
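As a sanity check outside DPDK, the following minimal sketch (my own test program, not from the DPDK sources, and using anonymous mappings rather than DPDK's hugetlbfs-backed files) maps a few 2 MB huge pages one at a time and prints the addresses the kernel hands back; if the allocation direction changed, they should come back descending. It assumes a 64-bit system with 2 MB hugepages reserved, e.g. via /proc/sys/vm/nr_hugepages.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    #define HUGE_SZ (2UL * 1024 * 1024)   /* 2 MB hugepage */

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            /* map one anonymous 2 MB hugepage, letting the kernel pick the address */
            void *va = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (va == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
            printf("mapping %d at %p\n", i, va);
            /* intentionally not unmapped, so the kernel cannot hand the same hole back */
        }
        return 0;
    }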

The related DPDK code is in
dpdk/lib/librte_eal/linuxapp/eal/eal_memory.c :: rte_eal_hugepage_init():
    for (i = 0; i < nr_hugefiles; i++) {
        new_memseg = 0;

        /* if this is a new section, create a new memseg */
        if (i == 0)
            new_memseg = 1;
        else if (hugepage[i].socket_id != hugepage[i-1].socket_id)
            new_memseg = 1;
        else if (hugepage[i].size != hugepage[i-1].size)
            new_memseg = 1;
        else if ((hugepage[i].physaddr - hugepage[i-1].physaddr) !=
            hugepage[i].size)
            new_memseg = 1;
        else if (((unsigned long)hugepage[i].final_va -
            (unsigned long)hugepage[i-1].final_va) != hugepage[i].size) {
            new_memseg = 1;
        }
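To spell out why this last test trips with the dump above (my own illustration, not DPDK code, assuming a 64-bit build): when final_va descends, the unsigned difference between consecutive pages wraps around instead of equaling the 2 MB page size, so new_memseg is set for every page and each 2 MB page ends up as its own segment, which would explain why the roughly 9 MB mbuf pool cannot be created.

    #include <stdio.h>

    int main(void)
    {
        unsigned long prev_va = 0x710421400000UL;  /* Segment 0 virt from the dump */
        unsigned long cur_va  = 0x710421200000UL;  /* Segment 1 virt from the dump */
        unsigned long size    = 0x200000UL;        /* 2 MB hugepage size */

        /* same expression as the DPDK check above */
        printf("cur - prev = 0x%lx, expected 0x%lx\n", cur_va - prev_va, size);
        /* prints cur - prev = 0xffffffffffe00000, so new_memseg is set */
        return 0;
    }

Comparing the absolute difference would make this particular test accept descending mappings, but the surrounding memseg bookkeeping also seems to assume ascending virtual addresses, so that alone is probably not a complete fix.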

Is this a known issue? Is there a workaround? Or could you advise which kernel config option may be related to this change in kernel behavior?

Thanks,
Michael
