[dpdk-dev] [BUG] net/af_xdp: Current code can only create one af_xdp device

Markus Theil markus.theil at tu-ilmenau.de
Wed Apr 24 22:33:35 CEST 2019


Hi Xiaolong,

With only one vdev everything works; it stops working as soon as I use two
vdevs. Both interfaces were brought up before testing.

Best regards,
Markus

On 24.04.19 16:47, Ye Xiaolong wrote:
> Hi, Markus
>
> On 04/24, Markus Theil wrote:
>> Hi Xiaolong,
>>
>> I also tested with i40e devices, with the same result.
>>
>> ./dpdk-testpmd -n 4 --log-level=pmd.net.af_xdp:debug --no-pci --vdev
>> net_af_xdp0,iface=enp36s0f0 --vdev net_af_xdp1,iface=enp36s0f1
>> EAL: Detected 16 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: No free hugepages reported in hugepages-2048kB
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456,
>> size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Configuring Port 0 (socket 0)
>> Port 0: 3C:FD:FE:A3:E7:30
>> Configuring Port 1 (socket 0)
>> xsk_configure(): Failed to create xsk socket. (-1)
>> eth_rx_queue_setup(): Failed to configure xdp socket
>> Fail to configure port 1 rx queues
>> EAL: Error - exiting with code: 1
>>   Cause: Start ports failed
>>
> Does a single vdev instance work on your side? And have you brought the interface up?
> xsk_configure requires the interface to be in the up state.
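>
> A minimal sketch of how that up state could be checked from userspace before
> xsk_configure() is reached (a hypothetical helper using SIOCGIFFLAGS, not PMD code):
>
>     #include <net/if.h>
>     #include <string.h>
>     #include <sys/ioctl.h>
>     #include <sys/socket.h>
>     #include <unistd.h>
>
>     /* Return 1 if the interface is administratively up, 0 if not, -1 on error. */
>     static int iface_is_up(const char *ifname)
>     {
>             struct ifreq ifr;
>             int fd = socket(AF_INET, SOCK_DGRAM, 0);
>             int up = -1;
>
>             if (fd < 0)
>                     return -1;
>             memset(&ifr, 0, sizeof(ifr));
>             strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
>             if (ioctl(fd, SIOCGIFFLAGS, &ifr) == 0)
>                     up = !!(ifr.ifr_flags & IFF_UP);
>             close(fd);
>             return up;
>     }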
>
> Thanks,
> Xiaolong
>
>
>> If I execute the same command again, I already get error -16 on the first port:
>>
>> ./dpdk-testpmd -n 4 --log-level=pmd.net.af_xdp:debug --no-pci --vdev
>> net_af_xdp0,iface=enp36s0f0 --vdev net_af_xdp1,iface=enp36s0f1
>> EAL: Detected 16 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: No free hugepages reported in hugepages-2048kB
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456,
>> size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Configuring Port 0 (socket 0)
>> xsk_configure(): Failed to create xsk socket. (-16)
>> eth_rx_queue_setup(): Failed to configure xdp socket
>> Fail to configure port 0 rx queues
>> EAL: Error - exiting with code: 1
>>   Cause: Start ports failed
>>
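>> -16 is EBUSY, which would fit an XDP program from the previous failed run
>> still being attached to the interface. A hedged sketch of how such a leftover
>> program could be detached with libbpf before retrying (a hypothetical helper,
>> not part of the PMD):
>>
>>     #include <net/if.h>      /* if_nametoindex() */
>>     #include <bpf/libbpf.h>  /* bpf_set_link_xdp_fd() */
>>
>>     /* Remove whatever XDP program is currently attached to ifname;
>>      * passing fd = -1 to bpf_set_link_xdp_fd() detaches the program. */
>>     static int detach_xdp(const char *ifname)
>>     {
>>             int ifindex = if_nametoindex(ifname);
>>
>>             if (ifindex == 0)
>>                     return -1;
>>             return bpf_set_link_xdp_fd(ifindex, -1, 0);
>>     }
>>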
>> Software versions/commits/info:
>>
>> - Linux 5.1-rc6
>> - DPDK 7f251bcf22c5729792f9243480af1b3c072876a5 (19.05-rc2)
>> - libbpf from https://github.com/libbpf/libbpf
>> (910c475f09e5c269f441d7496c27dace30dc2335)
>> - DPDK and libbpf built with Meson
>>
>> Best regards,
>> Markus
>>
>> On 4/24/19 8:35 AM, Ye Xiaolong wrote:
>>> Hi, Markus
>>>
>>> On 04/23, Markus Theil wrote:
>>>> Hi Xiaolong,
>>>>
>>>> I tested your commit "net/af_xdp: fix creating multiple instance" on the
>>>> current master branch. It does not work for me in the following minimal
>>>> test setup:
>>>>
>>>> 1) allocate 2x 1GB huge pages for DPDK
>>>>
>>>> 2) ip link add p1 type veth peer name p2
>>>>
>>>> 3) ./dpdk-testpmd --vdev=net_af_xdp0,iface=p1
>>>> --vdev=net_af_xdp1,iface=p2 (I also tested this with two igb devices,
>>>> with the same errors)
>>> I've tested 19.05-rc2 and started testpmd with 2 af_xdp vdevs (with two i40e devices),
>>> and it works for me.
>>>
>>> $ ./x86_64-native-linuxapp-gcc/app/testpmd -l 5,6 -n 4 --log-level=pmd.net.af_xdp:info -b 82:00.1 --no-pci --vdev net_af_xdp0,iface=ens786f1 --vdev net_af_xdp1,iface=ens786f0
>>> EAL: Detected 88 lcore(s)
>>> EAL: Detected 2 NUMA nodes
>>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>>> EAL: Probing VFIO support...
>>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>>> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
>>> testpmd: preferred mempool ops selected: ring_mp_mc
>>> Configuring Port 0 (socket 0)
>>> Port 0: 3C:FD:FE:C5:E2:41
>>> Configuring Port 1 (socket 0)
>>> Port 1: 3C:FD:FE:C5:E2:40
>>> Checking link statuses...
>>> Done
>>> No commandline core given, start packet forwarding
>>> io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
>>> Logical Core 6 (socket 0) forwards packets on 2 streams:
>>>   RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
>>>   RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>>>
>>>   io packet forwarding packets/burst=32
>>>   nb forwarding cores=1 - nb forwarding ports=2
>>>   port 0: RX queue number: 1 Tx queue number: 1
>>>     Rx offloads=0x0 Tx offloads=0x0
>>>     RX queue: 0
>>>       RX desc=0 - RX free threshold=0
>>>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>>       RX Offloads=0x0
>>>     TX queue: 0
>>>       TX desc=0 - TX free threshold=0
>>>       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>>       TX offloads=0x0 - TX RS bit threshold=0
>>>   port 1: RX queue number: 1 Tx queue number: 1
>>>     Rx offloads=0x0 Tx offloads=0x0
>>>     RX queue: 0
>>>       RX desc=0 - RX free threshold=0
>>>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>>       RX Offloads=0x0
>>>     TX queue: 0
>>>       TX desc=0 - TX free threshold=0
>>>       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>>       TX offloads=0x0 - TX RS bit threshold=0
>>> Press enter to exit
>>>
>>> Could you paste your whole failure log here?
>>>> I'm using Linux 5.1-rc6 and an up-to-date libbpf. The setup works for
>>>> the first device and fails for the second one when creating the bpf maps
>>>> in libbpf ("qidconf_map" or "xsks_map"). It seems that these maps also
>>>> need unique names and cannot exist twice under the same name.
>>> As far as I know, there should not be such a constraint; the bpf map
>>> creations are wrapped inside libbpf.
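>>>
>>> For reference, a minimal sketch of the per-interface/per-queue libbpf call
>>> that the PMD's xsk_configure() wraps (a simplification, not the PMD source);
>>> the "xsks_map"/"qidconf_map" creation happens inside this call:
>>>
>>>     #include <bpf/xsk.h>
>>>
>>>     /* umem, rx and tx are assumed to be set up beforehand by the caller. */
>>>     static int create_xsk(struct xsk_socket **xsk, const char *ifname,
>>>                           struct xsk_umem *umem, struct xsk_ring_cons *rx,
>>>                           struct xsk_ring_prod *tx)
>>>     {
>>>             struct xsk_socket_config cfg = {
>>>                     .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
>>>                     .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
>>>                     .libbpf_flags = 0,
>>>                     .xdp_flags = 0,
>>>                     .bind_flags = 0,
>>>             };
>>>
>>>             /* Returns a negative errno on failure, e.g. -EBUSY (-16). */
>>>             return xsk_socket__create(xsk, ifname, 0 /* queue_id */,
>>>                                       umem, rx, tx, &cfg);
>>>     }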
>>>
>>>> Furthermore, if I run step 3 again after it has failed for the first time,
>>>> xdp vdev allocation already fails for the first xdp vdev and never
>>>> reaches the second one. Please let me know if you need any program output
>>>> or more information from me.
>>>>
>>>> Best regards,
>>>> Markus
>>>>
>>> Thanks,
>>> Xiaolong
>>>
>>>> On 4/18/19 3:05 AM, Ye Xiaolong wrote:
>>>>> Hi, Markus
>>>>>
>>>>> On 04/17, Markus Theil wrote:
>>>>>> I tested the new af_xdp-based device on the current master branch and
>>>>>> noticed that the use of static mempool names allows only a single
>>>>>> af_xdp vdev to be created. If a second vdev of the same type gets
>>>>>> created, the mempool allocation fails.
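>>>>>>
>>>>>> A minimal sketch of the kind of per-vdev naming that avoids the clash,
>>>>>> deriving the pool name from the interface instead of a fixed string
>>>>>> (a simplification with placeholder sizes, not the actual PMD code):
>>>>>>
>>>>>>     #include <stdio.h>
>>>>>>     #include <rte_mbuf.h>
>>>>>>     #include <rte_mempool.h>
>>>>>>
>>>>>>     static struct rte_mempool *
>>>>>>     alloc_pool(const char *ifname, int socket_id)
>>>>>>     {
>>>>>>             char name[RTE_MEMPOOL_NAMESIZE];
>>>>>>
>>>>>>             /* "af_xdp_p1", "af_xdp_p2", ... instead of one static name. */
>>>>>>             snprintf(name, sizeof(name), "af_xdp_%s", ifname);
>>>>>>             return rte_pktmbuf_pool_create(name, 4096 /* mbufs */,
>>>>>>                                            250 /* cache */, 0 /* priv */,
>>>>>>                                            RTE_MBUF_DEFAULT_BUF_SIZE,
>>>>>>                                            socket_id);
>>>>>>     }
>>>>>>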
>>>>> Thanks for reporting. Could you paste the cmdline you used and the error log?
>>>>> Are you referring to ring creation or mempool creation?
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Xiaolong
>>>>>> Best regards,
>>>>>> Markus Theil

-- 
Markus Theil

Technische Universität Ilmenau, Fachgebiet Telematik/Rechnernetze
Postfach 100565
98684 Ilmenau, Germany

Phone: +49 3677 69-4582
Email: markus[dot]theil[at]tu-ilmenau[dot]de
Web: http://www.tu-ilmenau.de/telematik


