[dpdk-users] Performance troubleshooting of TCP/IP stack over DPDK.

Vincent Li vincent.mc.li at gmail.com
Tue May 26 18:50:38 CEST 2020


On Wed, 6 May 2020, Pavel Vajarov wrote:

> Hi there,
> 
> We are trying to compare the performance of the DPDK+FreeBSD networking stack
> vs the standard Linux kernel, and we have problems finding out why the former
> is slower. The details are below.
> 
> There is a project called F-Stack <https://github.com/F-Stack/f-stack>.
> It glues the networking stack from FreeBSD 11.01 over DPDK. We made a setup
> to test the performance of a transparent TCP proxy based on F-Stack and
> another one running on the standard Linux kernel.

I assume you wrote your own TCP proxy based on the F-Stack library?

> 
> Here are the test results:
> 1. The Linux based proxy was able to handle about 1.7-1.8 Gbps before it
> started to throttle the traffic. No visible CPU usage was observed on core
> 0 during the tests, only core 1, where the application and the IRQs were
> pinned, took the load.
> 2. The DPDK+FreeBSD proxy was able to thandle 700-800 Mbps before it
> started to throttle the traffic. No visible CPU usage was observed on core
> 0 during the tests only core 1, where the application was pinned, took the
> load. In some of the latter tests I did some changes to the number of read
> packets in one call from the network card and the number of handled events
> in one call to epoll. With these changes I was able to increase the
> throughput
> to 900-1000 Mbps but couldn't increase it more.
> 3. We did another test with the DPDK+FreeBSD proxy just to give us some
> more info about the problem. We disabled the TCP proxy functionality and
> let the packets simply be IP-forwarded by the FreeBSD stack. In this test
> we reached up to 5 Gbps without being able to throttle the traffic; we just
> don't have more traffic to redirect there at the moment. So the bottleneck
> seems to be either in the upper levels of the network stack or in the
> application code.
> 
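
About the batching you mention in point 2: if that is the maxevents passed to
ff_epoll_wait (plus the rx burst count on the DPDK side of F-Stack), the loop
usually looks roughly like the sketch below. This is only a minimal sketch
loosely following the epoll example shipped with F-Stack; MAX_EVENTS, the
buffer handling and the header names are illustrative, and the actual proxy
forwarding is left out.

/* Minimal F-Stack epoll loop sketch (names and sizes are illustrative).
 * MAX_EVENTS is the "events handled per epoll call" knob; the per-call
 * rx burst size lives on the DPDK side of F-Stack, not here. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/epoll.h>

#include "ff_config.h"
#include "ff_api.h"
#include "ff_epoll.h"

#define MAX_EVENTS 512

static int epfd;
static int listenfd;
static struct epoll_event events[MAX_EVENTS];

static int
loop(void *arg)
{
    /* timeout 0 so the DPDK polling thread never blocks */
    int nevents = ff_epoll_wait(epfd, events, MAX_EVENTS, 0);
    int i;

    for (i = 0; i < nevents; i++) {
        if (events[i].data.fd == listenfd) {
            int fd = ff_accept(listenfd, NULL, NULL);
            struct epoll_event ev;
            ev.events = EPOLLIN;
            ev.data.fd = fd;
            ff_epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
        } else if (events[i].events & EPOLLIN) {
            char buf[16384];
            ssize_t n = ff_recv(events[i].data.fd, buf, sizeof(buf), 0);
            if (n <= 0) {
                ff_epoll_ctl(epfd, EPOLL_CTL_DEL, events[i].data.fd, NULL);
                ff_close(events[i].data.fd);
            } else {
                /* proxy logic: forward buf to the upstream connection here */
            }
        }
    }
    return 0;
}

int
main(int argc, char *argv[])
{
    ff_init(argc, argv);

    listenfd = ff_socket(AF_INET, SOCK_STREAM, 0);
    /* ... ff_bind()/ff_listen() on the proxy port ... */

    epfd = ff_epoll_create(0);
    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = listenfd;
    ff_epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

    ff_run(loop, NULL);
    return 0;
}

Larger batches only help when one loop iteration can't drain what the
poll-mode driver pulled in, which may be why the gain you saw was small.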

I once tested the F-Stack-ported Nginx and used the Nginx TCP proxy; I could
achieve above 6 Gbps with iperf. After seeing your email, I set up PCI
passthrough to a KVM VM and ran the F-Stack Nginx as a web server under an
HTTP load test, no proxy, and I could achieve about 6.5 Gbps.

> There is a Huawei switch which redirects the traffic to this server. It
> regularly sends arping, and if the server doesn't respond it stops the
> redirection. So we assumed that when the redirection stops it's because the
> server throttles the traffic, drops packets, and can't respond to the arping
> because of the packet drops.

I did have some weird issues with ARPing on F-Stack; I manually added a
static ARP entry for the F-Stack interface for each F-Stack process. Not sure
if it is related to your ARPing problem, see https://github.com/F-Stack/f-stack/issues/515


