[dpdk-dev] Performance regression in DPDK 1.8/2.0

De Lara Guarch, Pablo pablo.de.lara.guarch at intel.com
Tue Apr 28 13:31:07 CEST 2015



> -----Original Message-----
> From: Paul Emmerich [mailto:emmericp at net.in.tum.de]
> Sent: Monday, April 27, 2015 11:29 PM
> To: De Lara Guarch, Pablo
> Cc: Pavel Odintsov; dev at dpdk.org
> Subject: Re: [dpdk-dev] Performance regression in DPDK 1.8/2.0
> 
> Hi,
> 
> Pablo <pablo.de.lara.guarch at intel.com>:
> > Could you tell me how you got the L1 cache miss ratio? Perf?
> 
> perf stat -e L1-dcache-loads,L1-dcache-misses l2fwd ...
> 
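(The miss ratio is then simply L1-dcache-misses divided by L1-dcache-loads from that output.)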
> 
> > Could you provide more information on how you run the l2fwd app,
> > in order to try to reproduce the issue:
> > - L2fwd Command line
> 
> ./build/l2fwd -c 3 -n 2 -- -p 3 -q 2
> 
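For the record, the arguments break down as: -c 3 is the coremask (lcores 0 and 1), -n 2 the number of memory channels, -p 3 the portmask (ports 0 and 1), and -q 2 the number of ports served per lcore; with two ports and -q 2, both ports should end up on a single forwarding lcore.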
> 
> > - L2fwd initialization (to check memory/CPU/NICs)
> 
> I unfortunately did not save the output, but I wrote down the important
> parts:
> 
> 1.7.1: no output regarding rx/tx code paths as init debug wasn't enabled
> 1.8.0 and 2.0.0: simple tx code path, vector rx
> 
> 
> Hardware:
> 
> CPU: Intel(R) Xeon(R) CPU E3-1230 v2
> Turbo Boost and Hyper-Threading disabled.
> Frequency fixed at 3.30 GHz via acpi_cpufreq.
> 
> NIC: X540-T2
> 
> Memory: Dual Channel DDR3 1333 MHz, 4x 4GB
> 
> > Did you change the l2fwd app between versions? L2fwd uses simple rx on 1.7.1,
> > whereas it uses vector rx on 2.0 (enable IXGBE_DEBUG_INIT to check it).
> 
> Yes, I had to update l2fwd when going from 1.7.1 to 1.8.0. However, the
> changes in the app were minimal.

Could you tell me which changes you made there? I see you are using the simple tx code path on 1.8.0, but with the default values you should be getting the vector tx path, unless you have changed something in the tx configuration.

I am also not sure whether you are using the simple tx code path on 1.7.1 as well, plus scattered rx. (Without changing the l2fwd app, I get scattered rx and vector tx.)
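
To make that concrete, here is a minimal sketch (not taken from your setup, just my assumption of what a "vector-tx-friendly" configuration looks like on 1.8/2.0) of the tx queue setup the ixgbe PMD wants before it will pick the simple/vector path; the exact thresholds are checked in ixgbe_rxtx.c, so treat the numbers as illustrative:

#include <rte_ethdev.h>

/* Sketch: a txconf that should let ixgbe choose the simple/vector tx path.
 * The PMD requires the two "simple" txq_flags and a small tx_rs_thresh;
 * a larger tx_rs_thresh falls back from vector tx to plain simple tx. */
static const struct rte_eth_txconf tx_conf = {
	.tx_rs_thresh   = 32,
	.tx_free_thresh = 32,
	.txq_flags      = ETH_TXQ_FLAGS_NOMULTSEGS |  /* no multi-segment mbufs */
	                  ETH_TXQ_FLAGS_NOOFFLOADS,   /* no checksum/VLAN offloads */
};

static int
setup_tx_queue(uint8_t portid, uint16_t nb_txd)
{
	/* queue 0 on the port's own NUMA socket */
	return rte_eth_tx_queue_setup(portid, 0, nb_txd,
			rte_eth_dev_socket_id(portid), &tx_conf);
}

As far as I can tell, keeping those flags but using a tx_rs_thresh above the vector limit gives the "simple tx code path" message without vector tx being enabled, which may be what happened here; txq_flags = 0 would select the full-featured path instead.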

Thanks!
Pablo

> 
> 1.8.0 and 2.0.0 used vector rx. Disabling vector rx via the DPDK .config file
> causes another 30% performance loss, so I kept it enabled.
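
(For reference, the build-time switch here should be CONFIG_RTE_IXGBE_INC_VECTOR in the .config; setting it to =n compiles the vectorized rx/tx routines out of the ixgbe PMD entirely.)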
> 
> 
> 
> > Which packet format/size did you use? Does your traffic generator take
> > into account the Inter-packet gap?
> 
> 64-byte packets, full line rate on both ports, i.e. 14.88 Mpps per port.
> The packets' content doesn't matter since l2fwd doesn't look at it; it was
> just some arbitrary payload with EthType 0x1234.
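
(That per-port figure already accounts for the inter-packet gap: on the wire each 64-byte frame occupies 64 B + 8 B preamble/SFD + 12 B inter-frame gap = 84 B = 672 bits, and 10 Gbit/s / 672 bits ≈ 14.88 Mpps.)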
> 
> 
> Let me know if you need any additional information.
> I'd also be interested in the configuration that resulted in the 20%
> speed-up that was mentioned in the original mbuf patch.
> 
> Paul


