[dpdk-users] The dpdk application performance degradation after moving to CentOS 7

Timur Bogdanov timurbogdanov at gmail.com
Wed Nov 6 10:21:25 CET 2019


Hi,

I encountered a performance degradation of my DPDK-based
application after moving from Red Hat 6 to CentOS 7.

Previously the application was compiled with the dpdk-2.2.0 library,
ran on Red Hat 6, and was able to process up to 7 Gb/s on each port of
an Intel X520-SR2 NIC without packet drops.

Now the application is compiled with the dpdk-17.11.6 library (the
application architecture itself has not changed), runs on CentOS 7
(3.10.0-957.el7.x86_64), and the NIC drops incoming packets (imissed)
when input traffic on each port exceeds 6 Gb/s. (I guess the handlers
process packets more slowly.)
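To confirm where the packets are being lost, it may help to dump the port statistics periodically. A minimal sketch against the dpdk-17.11 ethdev API (the function name `print_drop_counters` is mine; this needs a running DPDK application to execute):

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print per-port drop counters so the bottleneck can be located:
 * imissed   - packets dropped by the NIC because its RX queues were
 *             full (the handler cores are not draining fast enough),
 * rx_nombuf - RX mbuf allocation failures (mempool too small). */
static void print_drop_counters(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) != 0) {
        printf("port %u: failed to read stats\n", port_id);
        return;
    }
    printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
           " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           port_id, stats.ipackets, stats.imissed,
           stats.ierrors, stats.rx_nombuf);
}
```

If imissed grows while rx_nombuf stays at zero, the queues really are overflowing because the cores fall behind, rather than the mempool running dry.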

So the hardware, the input traffic, and the application itself have
not changed, and on both OSes the application uses isolated cores, but
performance has degraded.

I found an article
https://ivanvari.com/solving-poor-performance-on-rhel-and-centos-7/
and changed the tuned profile from latency-performance (used on Red
Hat 6) to network-latency, but it didn't help much.

I also tried to turn off some of the security patches related to
Spectre/Meltdown, following the article
https://access.redhat.com/articles/3311301, but that didn't help
either.

Is it possible to eliminate this performance degradation somehow, or
is the only solution to use faster CPUs and change the application
architecture (adding more handlers per port)?
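In case more handlers per port do become necessary, one common way to add them without restructuring the whole pipeline is RSS: the NIC hashes flows across several RX queues, and each queue gets its own polling core. A minimal sketch against the dpdk-17.11 API (`configure_port_rss` and `nb_queues` are names I made up; queue setup and device start would follow):

```c
#include <stdint.h>
#include <stddef.h>
#include <rte_ethdev.h>

/* Configure a port with nb_queues RX queues and enable RSS so the
 * NIC spreads incoming flows across them; each queue can then be
 * polled by a dedicated handler core. */
static int configure_port_rss(uint16_t port_id, uint16_t nb_queues)
{
    struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = ETH_MQ_RX_RSS,  /* receive-side scaling */
        },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,       /* use the driver's default key */
                .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
            },
        },
    };

    /* One TX queue per RX queue here for symmetry. */
    return rte_eth_dev_configure(port_id, nb_queues, nb_queues,
                                 &port_conf);
}
```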


/Regards, Timur

