[dpdk-dev] one worker reading multiple ports

Newman Poborsky newman555p at gmail.com
Fri Nov 21 15:03:25 CET 2014


So, since a mempool is multi-consumer (by default), if one is used to
configure queues on multiple NICs that belong to different sockets, then
mbuf allocation will fail? But if 2 NICs have the same socket owner,
everything should work fine?  Since I'm talking about 2 ports on the same
NIC, they must have the same socket owner, so RX should work with both RX
queues configured with the same mempool, right? But in my case it doesn't,
so I guess I'm missing something.

Any idea how I can troubleshoot why allocation fails with one shared
mempool but works fine when each queue has its own mempool?
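
For reference, this is roughly the first check I plan to run: print which
socket each port and the RX lcore sit on, and how many mbufs the pool still
has free. (A sketch only -- port ids 0 and 1 and the mbuf_pool pointer are
placeholders from my setup, and rte_mempool_count() is the free-count call
in this DPDK version.)

#include <stdio.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static void
dump_socket_info(struct rte_mempool *mbuf_pool)
{
        /* which socket is this lcore on, and which socket owns each port? */
        printf("lcore %u runs on socket %u\n",
               rte_lcore_id(), rte_socket_id());
        printf("port 0 -> socket %d, port 1 -> socket %d\n",
               rte_eth_dev_socket_id(0), rte_eth_dev_socket_id(1));
        /* free mbufs left in the pool; if this ever reaches 0, the
         * mbuf allocation inside ixgbe_recv_pkts() will return NULL */
        printf("mbufs still free: %u\n", rte_mempool_count(mbuf_pool));
}

If the free count drops to 0, the pool is simply too small for both queues
(or mbufs are leaking somewhere); if the sockets differ, it's the NUMA
mismatch you describe below.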

Thank you,

Newman

On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall <mhall at mhcomputing.net>
wrote:

> On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > Thank you for your answer.
> >
> > I just realized that the reason rte_eth_rx_burst() returns 0 is
> > because inside ixgbe_recv_pkts() this fails:
> > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> >
> > Does this mean that every RX queue should have its own rte_mempool?
> > If so, are there any optimal values for: number of RX descriptors,
> > per-queue rte_mempool size, number of hugepages (from what I
> > understand, these 3 are correlated)?
> >
> > If I'm wrong, please explain why.
> >
> > Thanks!
> >
> > BR,
> > Newman
>
> Newman,
>
> Mempools are created per NUMA node (ordinarily this means per processor
> socket if sockets > 1).
>
> When doing Tx / Rx Queue Setup, one should determine the socket which
> owns the given PCI NIC, and try to use memory on that same socket to
> handle traffic for that NIC and Queues.
>
> So, for N cards with Q * N Tx / Rx queues, you only need S mempools,
> where S is the number of sockets in the system.
>
> Then each of the Q * N queues will use the mempool from the socket
> closest to the card.
>
> Matthew.
>
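
P.S. If I understood you correctly, the per-socket layout would look
roughly like this -- a sketch only, with made-up pool name and sizes and no
real error handling; pool_for_port() and NB_MBUF are my placeholders, not
DPDK API:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Placeholder sizing: the pool must cover every queue that shares it,
 * roughly nb_queues * (nb_rx_desc + nb_tx_desc + burst + cache). */
#define NB_MBUF   8192
#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

/* one mempool per NUMA socket, created lazily */
static struct rte_mempool *pools[RTE_MAX_NUMA_NODES];

static struct rte_mempool *
pool_for_port(uint8_t port_id)
{
        int socket = rte_eth_dev_socket_id(port_id);
        char name[RTE_MEMPOOL_NAMESIZE];

        if (socket < 0)   /* unknown topology: fall back to socket 0 */
                socket = 0;
        if (pools[socket] == NULL) {
                snprintf(name, sizeof(name), "mbuf_pool_s%d", socket);
                pools[socket] = rte_mempool_create(name, NB_MBUF, MBUF_SIZE,
                        32, sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init, NULL,
                        rte_pktmbuf_init, NULL,
                        socket, 0);
        }
        return pools[socket];
}

/* each RX queue then takes the pool from its port's own socket */
static int
setup_rx_queue(uint8_t port_id, uint16_t queue_id, uint16_t nb_rxd)
{
        int socket = rte_eth_dev_socket_id(port_id);

        if (socket < 0)
                socket = 0;
        return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd,
                        socket, NULL /* default rx_conf */,
                        pool_for_port(port_id));
}

That way two ports on the same NIC end up sharing one pool anyway, which
is exactly the setup that fails for me, so the pool size is my main
suspect.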

