[dpdk-dev] one worker reading multiple ports

Newman Poborsky newman555p at gmail.com
Fri Nov 21 23:55:13 CET 2014


Nice guess :)  After adding a check with rte_mempool_empty(), as soon as I
enable the second port for reading, it shows that the mempool is empty. Thank
you for the help!
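
For anyone hitting the same problem, the check was roughly this (simplified;
mb_pool here is the pool I passed to rte_eth_rx_queue_setup()):

    #include <stdio.h>
    #include <rte_mempool.h>

    /* called from the RX loop when rte_eth_rx_burst() keeps returning 0 */
    printf("mempool %s: avail=%u empty=%d\n", mb_pool->name,
           rte_mempool_count(mb_pool), rte_mempool_empty(mb_pool));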

On Fri, Nov 21, 2014 at 3:44 PM, Bruce Richardson <
bruce.richardson at intel.com> wrote:

> On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> > So, since the mempool is multi-consumer (by default), if one is used to
> > configure queues on multiple NICs that have different socket owners, then
> > mbuf allocation will fail? But if 2 NICs have the same socket owner,
> > everything should work fine? Since I'm talking about 2 ports on the same
> > NIC, they must have the same owner, so RX should work with RX queues
> > configured with the same mempool, right? But in my case it doesn't, so I
> > guess I'm missing something.
>
> Actually, the mempools will work with NICs on multiple sockets - it's just
> that performance is likely to suffer due to QPI usage. The mempools being
> on
> one socket or the other is not going to break your application.
>
> >
> > Any idea how can I troubleshoot why allocation fails with one mempool and
> > works fine with each queue having its own mempool?
>
> At a guess, I'd say that your mempools just aren't big enough. Try doubling
> the size of the mempool in the single-pool case and see if it helps things.
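>
> As a rough rule of thumb, the pool has to cover every buffer that can be in
> flight at once. Something like this (sketch; all names are placeholders):
>
>     nb_mbufs = nb_rx_queues * nb_rx_desc              /* held by RX rings */
>              + nb_tx_queues * nb_tx_desc              /* held by TX rings */
>              + nb_lcores * (burst_size + cache_size); /* bursts + caches  */
>
> With two ports sharing one pool, the RX-ring term doubles, which is why a
> size that worked for one port can run dry with two.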
>
> /Bruce
>
> >
> > Thank you,
> >
> > Newman
> >
> > On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall <mhall at mhcomputing.net>
> > wrote:
> >
> > > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > > Thank you for your answer.
> > > >
> > > > I just realized that the reason the rte_eth_rx_burst() returns 0 is
> > > > because inside ixgbe_recv_pkts() this fails:
> > > > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> > > >
> > > > Does this mean that every RX queue should have its own rte_mempool?
> > > > If so, are there any optimal values for: the number of RX descriptors,
> > > > per-queue rte_mempool size, and the number of hugepages (from what I
> > > > understand, these 3 are correlated)?
> > > >
> > > > If I'm wrong, please explain why.
> > > >
> > > > Thanks!
> > > >
> > > > BR,
> > > > Newman
> > >
> > > Newman,
> > >
> > > Mempools are created per NUMA node (ordinarily this means per
> > > processor socket if sockets > 1).
> > >
> > > When doing Tx / Rx Queue Setup, one should determine the socket which
> > > owns the given PCI NIC, and try to use memory on that same socket to
> > > handle traffic for that NIC and Queues.
> > >
> > > So, for N cards with Q * N Tx / Rx queues, you only need S mempools,
> > > where S is the number of CPU sockets.
> > >
> > > Then each of the Q * N queues will use the mempool from the socket
> > > closest to the card.
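> > >
> > > In code that looks something like this (sketch; assumes a pools[] array
> > > with one mempool created per socket, and placeholder names like nb_rxd):
> > >
> > >     int sock = rte_eth_dev_socket_id(port_id);
> > >     if (sock < 0)   /* socket unknown, fall back to the caller's */
> > >         sock = rte_socket_id();
> > >     rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd, sock,
> > >                            &rx_conf, pools[sock]);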
> > >
> > > Matthew.
> > >
>

