[dpdk-dev] Rx-errors with testpmd (only 75% line rate)
junhanece at gmail.com
Mon Feb 10 18:34:47 CET 2014
We are also trying to purchase an IXIA traffic generator. Could you let us
know which chassis + load modules you are using so we can use that as a
reference to look for the model we need? There seems to be quite a number
of different models.
On Tue, Jan 28, 2014 at 9:31 AM, Dmitry Vyal <dmitryvyal at gmail.com> wrote:
> On 01/28/2014 12:00 AM, Michael Quicquaro wrote:
>> I cannot thank you enough for this information. This too was my main
>> problem. I put a "small" unmeasured delay before the call to
>> rte_eth_rx_burst() and suddenly it started returning bursts of 512 packets
>> vs. 4!!
>> Best Regards,
> Thanks for confirming my guesses! By the way, make sure the number of
> packets you receive in a single burst is less than the configured queue
> size, or you will lose packets too. Maybe your "small" delay is not so
> small :) For my own purposes I use a delay of about 150 usecs.
> P.S. I wonder why this issue is not mentioned in the documentation. Is it
> evident to everyone doing network programming?
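A rough sketch of the polling pattern described above, assuming a plain DPDK
receive loop; the port/queue ids, ring size and the 150 usec delay are
illustrative values, not taken from the thread:

    /* Pause briefly before each poll so the NIC can accumulate a full
     * burst; keep the burst size below the configured ring size. */
    #include <rte_ethdev.h>
    #include <rte_cycles.h>
    #include <rte_mbuf.h>

    #define NB_RXD        2048  /* RX descriptors configured on the ring */
    #define BURST_SIZE     512  /* must stay below NB_RXD                */
    #define POLL_DELAY_US  150  /* small delay so bursts can fill up     */

    static void rx_loop(uint8_t port, uint16_t queue)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            rte_delay_us(POLL_DELAY_US);
            uint16_t nb_rx = rte_eth_rx_burst(port, queue, bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]); /* stand-in for real processing */
        }
    }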
>> On Wed, Jan 22, 2014 at 9:52 AM, Dmitry Vyal <dmitryvyal at gmail.com> wrote:
>> Hello Michael,
>> I suggest you check the average burst sizes on your receive queues.
>> Looks like I stumbled upon a similar issue several times. If you
>> are calling rte_eth_rx_burst too frequently, the NIC begins losing
>> packets no matter how much CPU horsepower you have (the more you
>> have, the more it loses, actually). In my case this situation occurred
>> when the average burst size was less than 20 packets or so. I'm not
>> sure what the reason for this behavior is, but I observed it in
>> several applications on Intel 82599 10Gb cards.
>> Regards, Dmitry
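One way to measure the average burst size mentioned above is to count packets
and non-empty bursts around each rte_eth_rx_burst call; a minimal sketch, with
illustrative counter and function names:

    #include <stdio.h>
    #include <rte_ethdev.h>

    struct burst_stats {
        uint64_t calls;    /* rx_burst calls that returned packets */
        uint64_t packets;  /* total packets received               */
    };

    static struct burst_stats bstats[RTE_MAX_ETHPORTS];

    static uint16_t
    rx_burst_counted(uint8_t port, uint16_t queue,
                     struct rte_mbuf **bufs, uint16_t n)
    {
        uint16_t nb_rx = rte_eth_rx_burst(port, queue, bufs, n);
        if (nb_rx > 0) {
            bstats[port].calls++;
            bstats[port].packets += nb_rx;
        }
        return nb_rx;
    }

    static void print_avg_burst(uint8_t port)
    {
        if (bstats[port].calls)
            printf("port %u: avg burst %.1f packets\n", port,
                   (double)bstats[port].packets / bstats[port].calls);
    }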
>> On 01/09/2014 11:28 PM, Michael Quicquaro wrote:
>> My hardware is a Dell PowerEdge R820:
>> 4x Intel Xeon E5-4620 2.20GHz 8 core
>> 16GB RDIMM 1333 MHz Dual Rank, x4 - Quantity 16
>> Intel X520 DP 10Gb DA/SFP+
>> So in summary 32 cores @ 2.20GHz and 256GB RAM
>> ... plenty of horsepower.
>> I've reserved 16 1GB Hugepages
>> I am configuring only one interface and using testpmd in rx_only mode
>> to first see if I can receive at line rate.
>> I am generating traffic on a different system which is running the
>> netmap pkt-gen program - generating 64 byte packets at close to line
>> rate.
>> I am only able to receive approx. 75% of line rate and I see the
>> Rx-errors in the port stats going up proportionally.
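The Rx-errors counter testpmd reports comes from the port statistics; a
minimal sketch of reading the same counters from an application, with port id
0 assumed (newer DPDK releases also expose an imissed field for packets
dropped because the RX ring was full):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    static void dump_rx_stats(uint8_t port)
    {
        struct rte_eth_stats st;

        rte_eth_stats_get(port, &st);
        printf("port %u: ipackets=%" PRIu64 " ierrors=%" PRIu64
               " rx_nombuf=%" PRIu64 "\n",
               port, st.ipackets, st.ierrors, st.rx_nombuf);
    }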
>> I have verified that all receive queues are being used, but oddly
>> enough, it doesn't matter how many queues (more than 2) I use, the
>> throughput is the same. I have verified with 'mpstat -P ALL' that all
>> specified cores are used. The utilization of each core is only
>> roughly 25%.
>> Here is my command line:
>> testpmd -c 0xffffffff -n 4 -- --nb-ports=1 --coremask=0xfffffffe
>> --nb-cores=8 --rxd=2048 --txd=2048 --mbcache=512 --burst=512
>> --txq=8 --interactive
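Since testpmd is running with --interactive, the counters can also be watched
from its prompt; a rough sequence using the standard runtime commands, with
port 0 assumed:

    testpmd> clear port stats all
    testpmd> start
    ... let the generator run for a fixed interval ...
    testpmd> stop
    testpmd> show port stats 0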
>> What can I do to track down this problem? It seems very similar to a
>> thread on this list back in May titled "Best example for showing
>> throughput?" where no resolution was ever mentioned in the thread.
>> Thanks for any help.
>> - Michael