[dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%

Hanoch Haim (hhaim) hhaim at cisco.com
Tue Feb 14 13:31:56 CET 2017


Hi John, thank you for the fast response.
I assume the Intel tests are more like rx->tx forwarding tests.
In our case we are doing mostly tx, which is more similar to dpdk-pkt-gen.
The cases in which we cached the mbufs were affected the most (see the
sketch below).
We expect to see the same issue with a simple DPDK application.
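
To illustrate, here is a minimal sketch (not our actual code) of the
cached-mbuf TX pattern that regressed the most, assuming a pre-built
template mbuf and the standard refcount/TX burst APIs:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch only: transmit the same pre-built mbuf over and over.
 * Bumping the reference count before each send means the PMD's
 * free after transmission never returns the mbuf to the mempool,
 * so the per-packet alloc/free cost is avoided entirely. */
static void
tx_cached_mbuf(uint8_t port, uint16_t queue, struct rte_mbuf *tmpl)
{
	/* one extra reference per send; the PMD drops it on free */
	rte_mbuf_refcnt_update(tmpl, 1);
	while (rte_eth_tx_burst(port, queue, &tmpl, 1) == 0)
		; /* TX ring full, retry */
}

Because this path touches the mbuf header (refcount) on every single
packet, any infra change to the mbuf layout or refcount handling would
show up here first, across all drivers.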

Thanks,
Hanoh

-----Original Message-----
From: Mcnamara, John [mailto:john.mcnamara at intel.com] 
Sent: Tuesday, February 14, 2017 2:19 PM
To: Hanoch Haim (hhaim); dev at dpdk.org
Cc: Ido Barnea (ibarnea)
Subject: RE: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%



> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Hanoch Haim 
> (hhaim)
> Sent: Tuesday, February 14, 2017 11:45 AM
> To: dev at dpdk.org
> Cc: Ido Barnea (ibarnea) <ibarnea at cisco.com>; Hanoch Haim (hhaim) 
> <hhaim at cisco.com>
> Subject: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%
> 
> Hi,
> 
> We (the TRex traffic generator project) upgraded DPDK from 16.07 to
> 17.02 RC-3 and experienced a performance degradation on the following
> NICs:
> 
> XL710 : 10-15%
> ixgbe : 8% (one test case)
> mlx5  : 8% (two test cases)
> X710  : no impact (same driver as XL710)
> VIC   : no impact
> 
> It might be related to a DPDK infrastructure change, as it affects
> more than one driver (maybe mbuf?).
> We wanted to know whether this is expected before investing more
> time in this.
> The Y-axis numbers in all the following charts (from Grafana) are
> MPPS/core, which is inversely proportional to the CPU cycles spent
> per packet: a higher MPPS/core means fewer CPU cycles to send a
> packet.
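> For example, assuming (hypothetically) a 3 GHz core:
> cycles/packet = 3e9 / (MPPS/core * 1e6), so 15 MPPS/core is ~200
> cycles/packet, and a 10% drop to 13.5 MPPS/core is ~222
> cycles/packet, i.e. ~22 extra cycles per packet.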

Hi,

Thanks for the update. From a quick check with the Intel test team, they haven't seen this; they would have flagged it if they had. Maybe someone from Mellanox/Netronome could confirm as well.

Could you do a git-bisect to identify the change that caused this?
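
For example, assuming the upstream tags v16.07 and v17.02-rc3, something
like this should narrow it down to a single commit:

    git bisect start
    git bisect bad v17.02-rc3
    git bisect good v16.07
    # at each step: rebuild, rerun the benchmark, then mark the commit
    git bisect good    # or: git bisect bad

With on the order of a thousand commits between the two releases, that
should only take around ten build/test iterations.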

John


