[dpdk-dev] Multiple cores for DPDK behind SmartNIC

Gaëtan Rivet grive at u256.net
Tue May 5 11:08:19 CEST 2020


Hello Jonatan,

On 27/04/20 14:49 +0200, Jonatan Langlet wrote:
> Hi group,
> 
> We are building a setup with DPDK bound to VF ports of a Netronome Agilio
> CX 2x40 (NFP4000) SmartNIC.
> Netronome does some P4 processing of packets, and forwards through SR-IOV
> to host where dpdk will continue processing.
> 
> My problem: in DPDK I can not allocate more than a single RX-queue to the
> ports.
> Multiple dpdk processes can not pull from the same queue, which means that
> my dpdk setup only works with a single core.
> 
> Binding dpdk to PF ports on a simple Intel 2x10G NIC works without a
> problem, multiple RX-queues (and hence multiple cores) work fine.
> 
> 
> I bind dpdk to Netronome VF ports with the igb_uio driver.
> I have seen vfio-pci mentioned, would using this driver allow multiple
> RX-queues? We had some problems using this driver, which is why it has not
> yet been tested.
> 
> 
> If you need more information, I will be happy providing it
> 
> 
> Thanks,
> Jonatan

The VF in the guest is managed by the vendor PMD, which here means the NFP
PMD applies. Either igb_uio or vfio-pci only serves to expose the device
mappings to userland; neither controls the port itself. This means the
NFP PMD is doing the work of telling the hardware to use multiple queues
and initializing them.

I am not familiar with this PMD, but reading the code quickly, I see
nfp_net_enable_queues() at drivers/net/nfp/nfp_net.c:404, that does
nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues), where
enabled_queues is a uint64_t bitmask describing the enabled queues, derived
from dev->data->nb_rx_queues.

I think you need to look into this first. You can use
`gdb -ex 'break nfp_net_enable_queues' -ex 'run' --args <your-app-and-args-here>`
then `p dev->data->nb_rx_queues` to check that your config is
properly passed down to the eth_dev during PMD initialization.

It might be your app failing to set the config, your command line
missing an --rxq=N somewhere (or whichever is the equivalent option in
your app), or a failure at VF init -- some hardware imposes
limitations on its VFs, and there it will really depend on your NIC
and driver.

You can reduce entropy by first running testpmd on your VF with the
--rxq=N option; start/stop in testpmd will show you the number of
queues in use.

BR,
-- 
Gaëtan
