[dpdk-dev] Rx-errors with testpmd (only 75% line rate)

Michael Quicquaro michael.quicquaro at gmail.com
Fri Jan 24 00:22:34 CET 2014

Thank you, everyone, for all of your suggestions, but unfortunately I'm
still having the problem.

I have reduced the test to using 2 cores (one of which is the master core),
both on the socket to which the NIC's PCI slot is attached.  I am running in
rxonly mode, so I am basically just counting packets.  I've tried all
different burst sizes; nothing seems to make any difference.

Since my original post, I have acquired an IXIA tester, so I have better
control over my testing.  I send 250,000,000 packets to the interface and
get roughly 25,000,000 Rx-errors with every run.  I have verified that the
number of Rx-errors is consistent with the value in the NIC's RXMPC register.

Just for sanity's sake, I tried switching the cores to the other socket and
running the same test.  As expected, I got more packet loss: roughly
87,000,000 Rx-errors.
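
For reference, the drop rates those counts imply, plus the theoretical
10 GbE packet rate, work out as follows (quick shell arithmetic using the
figures above; 64-byte frames plus 20 bytes of Ethernet framing overhead
assumed for the line-rate number):

```shell
# Loss on the NIC-local socket: 25M Rx-errors out of 250M packets sent.
awk 'BEGIN { printf "local-socket loss: %.0f%%\n", 25000000/250000000*100 }'

# Loss on the remote socket: ~87M Rx-errors out of 250M packets sent.
awk 'BEGIN { printf "remote-socket loss: %.1f%%\n", 87000000/250000000*100 }'

# 10 GbE line rate at 64-byte frames (64B frame + 20B preamble/IFG, 8 bits/byte).
awk 'BEGIN { printf "line rate: %d pps\n", 10e9/((64+20)*8) }'
```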

I am running Red Hat 6.4, which uses kernel 2.6.32-358.

This is a NUMA-supported system, but whether or not I use --numa doesn't
seem to make a difference.
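
To rule out a cross-socket core sneaking into the mask, the NIC's NUMA node
can be read from sysfs and the coremask built only from cores on that node.
This is just a sketch: the PCI address 0000:04:00.0, the node number, and
the coremask 0x3 are placeholders to be substituted for your system.

```shell
# Placeholder PCI address -- substitute your NIC's (from lspci).
cat /sys/bus/pci/devices/0000:04:00.0/numa_node

# Cores local to that node (assuming the line above printed 0):
cat /sys/devices/system/node/node0/cpulist

# Launch testpmd with only those cores, e.g. cores 0-1 -> coremask 0x3.
./testpmd -c 0x3 -n 4 -- --forward-mode=rxonly
```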

Looking at the Intel documentation, it appears that I should be able to do
what I am trying to do easily.  In fact, the documentation implies that I
should be able to reach roughly 40 Gbps with a single 2.x GHz processor core
on a system whose other configuration (memory, OS, etc.) is similar to mine.
It appears to me that many of the details of these benchmarks are missing.

Can someone on this list verify for me that what I am trying to do is
possible, and that they have actually done it successfully?

Much appreciation for all the help.
- Michael

On Wed, Jan 22, 2014 at 3:38 PM, Robert Sanford <rsanford at prolexic.com> wrote:

> Hi Michael,
> > What can I do to trace down this problem?
> May I suggest that you try to be more selective in the core masks on the
> command line. The test app may choose some cores from "other" CPU sockets.
> Only enable cores of the one socket to which the NIC is attached.
> > It seems very similar to a
> > thread on this list back in May titled "Best example for showing
> > throughput?" where no resolution was ever mentioned in the thread.
> After re-reading *that* thread, it appears that their problem may have
> been trying to achieve ~40 Gbits/s of bandwidth (2 ports x 10 Gb Rx + 2
> ports x 10 Gb Tx), plus overhead, over a typical dual-port NIC whose total
> bus bandwidth is a maximum of 32 Gbits/s (PCI express 2.1 x8).
> --
> Regards,
> Robert
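
Robert's bus-bandwidth point can be sanity-checked with quick arithmetic:
PCIe 2.x runs each lane at 5 GT/s with 8b/10b encoding, so an x8 slot tops
out at about 32 Gbit/s of usable bandwidth, below the ~40 Gbit/s that
2 ports x 10 Gb Rx plus 2 ports x 10 Gb Tx would need:

```shell
# PCIe 2.x: 5 GT/s per lane, 8b/10b encoding -> 4 Gbit/s usable per lane.
awk 'BEGIN { printf "x8 slot:  %d Gbit/s\n", 8 * 5 * 8/10 }'

# Demand in the May thread: 2 ports x 10 Gb Rx + 2 ports x 10 Gb Tx.
awk 'BEGIN { printf "demand:   %d Gbit/s\n", 2*10 + 2*10 }'
```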
