[dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0

Andrey Korolyov andrey at xdel.ru
Fri Feb 6 15:43:35 CET 2015


On Tue, Feb 3, 2015 at 8:21 PM, Andrey Korolyov <andrey at xdel.ru> wrote:
>> These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
>> By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
>> patch doesn't change this, just the DPDK version.
>
> Sorry, I referred to the wrong part there: bulk transmission, which is
> clearly not involved in my case. The idea was that conditionally
> enabling prefetch for rx queues (BULK_ALLOC) might help somehow, but
> it would probably mask the issue instead of solving it directly. As I
> understand it, a strict drop rule should have zero impact on the main
> ovs thread (and this is true) and should work fine at line rate
> (this is not the case).
>
>>
>> Main things to consider are to use isolcpus, pin the pmd thread and keep everything
>> on 1 NUMA socket. At 11 Mpps without packet loss on that processor I suspect you are
>> doing those things already.
>
> Yes, with all the tuning improvements (see the sketch below) I was able
> to do this, but the bare Linux stack on the same machine is able to
> handle 12Mpps, and there are no hints of what exactly is being congested.
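
For reference, a minimal sketch of the tuning steps mentioned above;
the option names assume an OVS build of roughly this vintage with DPDK
support, and the PCI address and core masks are placeholders rather
than the values from this setup:

    # isolate the cores that will run the pmd thread (kernel command line):
    #   isolcpus=2,3

    # check which NUMA node the NIC sits on, so pmd cores and hugepage
    # memory can be kept local to it (PCI address is a placeholder):
    cat /sys/bus/pci/devices/0000:04:00.0/numa_node

    # pin the pmd thread to an isolated core (mask is a placeholder; this
    # assumes the build supports other_config:pmd-cpu-mask):
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x4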

Also, both action=NORMAL and action=output:<non-dpdk port> manage flow
control in such a way that the generator side reaches line rate
(14.8Mpps) on 60-byte data packets, although a very high drop ratio
persists. With action=DROP or action=output:X, where X is another dpdk
port, flow control settles somewhere around 13Mpps. Of course, using a
regular host interface or the NORMAL action generates a lot of context
switches, mainly from miniflow_extract() and emc_..(), but the
difference in the syscall distribution between a congested link (where
line rate is reached) and a non-congested one is unobservable.
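
For clarity, the compared flow rules look roughly like this; the bridge
name and port numbers are placeholders, not the actual ones from this
setup:

    # generator reaches line rate, but a very high drop ratio persists:
    ovs-ofctl add-flow br0 "in_port=1,actions=NORMAL"
    ovs-ofctl add-flow br0 "in_port=1,actions=output:3"   # 3 = non-dpdk port

    # flow control settles around 13Mpps:
    ovs-ofctl add-flow br0 "in_port=1,actions=drop"
    ovs-ofctl add-flow br0 "in_port=1,actions=output:2"   # 2 = another dpdk port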

