[dpdk-dev] DPDK mbuf pool in SR-IOV env and one RX/TX queue
saurabh.globe at gmail.com
Thu Jan 28 05:39:26 CET 2016
Any clues or hints on how to debug this kind of problem with SR-IOV? Only
the primary process can send packets; the secondary process cannot. I have
verified the host's qprc and qptc counters on the PF, and they do increment.
SR-IOV with DPDK seems more challenging than PCI passthrough of the whole NIC.
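As a debugging aid, one option is to dump the VF port's own counters from
inside the DPDK app (in both the primary and the secondary process) and
compare them with the PF-side qprc/qptc counters on the host. A minimal
sketch, assuming the VF is port 0; the helper name is made up:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: print the VF port's basic stats so they can be
 * compared against the PF counters seen on the host. */
static void
dump_vf_stats(uint16_t port_id)
{
        struct rte_eth_stats stats = { 0 };

        rte_eth_stats_get(port_id, &stats);    /* basic per-port counters */
        printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
               " oerrors=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.opackets, stats.oerrors);
}

If opackets increments in the secondary process but nothing reaches the
wire, that at least narrows the problem down to the VF/PF side rather than
the application's TX path.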
On Jan 26, 2016 12:19 PM, "Bruce Richardson" <bruce.richardson at intel.com>
wrote:
> On Mon, Jan 25, 2016 at 04:15:28PM -0800, Saurabh Mishra wrote:
> > Hi Bruce --
> > >The sharing of the mbuf pool is not an issue, but sharing of rx/tx
> > >is.
> > >The ethdev queues are not multi-thread safe, so to share a queue between
> > >processes or threads, you need to put in locks or other access control
> > >mechanisms. [This also implies a performance hit due to the locking]
> > >Regards,
> > >/Bruce
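As an illustration of the locking Bruce describes, a minimal sketch of
guarding a single shared TX queue with an rte_spinlock kept in memory
visible to both processes (the variable and function names are made up):

#include <rte_spinlock.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical: the lock lives in shared memory, e.g. in a memzone
 * allocated by the primary process. */
static rte_spinlock_t *tx_lock;

static uint16_t
send_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t n)
{
        uint16_t sent;

        /* serialize access to queue 0, which both processes share */
        rte_spinlock_lock(tx_lock);
        sent = rte_eth_tx_burst(port_id, 0, pkts, n);
        rte_spinlock_unlock(tx_lock);
        return sent;
}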
> > Right. So now we have only one process doing rx/tx on queue 0, given
> > that the maximum supported queue count is 1.
> > However, we have noticed that if our process which does rx/tx is not
> > the primary, then we can't transmit packets out with SR-IOV.
> > Is there any specific limitation in SR-IOV (the VF driver in DPDK) such
> > that only the primary process can receive and transmit packets?
> > In our model, we have an agent process which monitors links and another
> > process which does packet processing. If we make our agent process the
> > primary, then our secondary process is not able to send packets --
> > rte_eth_tx_burst() succeeds but the recipient does not receive them.
> > Thanks,
> > /Saurabh
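For reference, a rough sketch of the packet-processing (secondary) side of
the model described above, assuming the agent runs as the primary and has
already configured port 0 / queue 0; the forwarding loop and burst size are
illustrative only, not taken from the original mail:

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

int
main(int argc, char **argv)
{
        struct rte_mbuf *pkts[32];
        uint16_t nb, sent, i;

        /* started with --proc-type=secondary; EAL attaches to the hugepage
         * memory and port configuration set up by the primary process */
        if (rte_eal_init(argc, argv) < 0)
                return -1;

        for (;;) {
                nb = rte_eth_rx_burst(0, 0, pkts, 32);
                if (nb == 0)
                        continue;
                /* ... packet processing would go here ... */
                sent = rte_eth_tx_burst(0, 0, pkts, nb);
                for (i = sent; i < nb; i++)
                        rte_pktmbuf_free(pkts[i]);   /* drop any unsent mbufs */
        }
        return 0;
}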
> There should be no restrictions on RX/TX from secondary processes.