[dpdk-users] Performance troubleshooting of TCP/IP stack over DPDK.

Vincent Li vincent.mc.li at gmail.com
Wed May 27 18:44:49 CEST 2020



On Wed, 27 May 2020, Pavel Vajarov wrote:

>       > Hi there,
>       >
>       > We are trying to compare the performance of the DPDK+FreeBSD networking
>       > stack vs the standard Linux kernel, and we are having trouble finding out
>       > why the former is slower. The details are below.
>       >
>       > There is a project called F-Stack <https://github.com/F-Stack/f-stack>.
>       > It glues the networking stack from
>       > FreeBSD 11.01 over DPDK. We made a setup to test the performance of a
>       > transparent
>       > TCP proxy based on F-Stack and another one running on the standard Linux
>       > kernel.
> 
>       I assume you wrote your own TCP proxy based on the F-Stack library?
> 
> 
> Yes, I wrote a transparent TCP proxy based on the F-Stack library for the tests.
> The thing is that we have our transparent caching proxy running on Linux and
> now we are trying to find ways to improve its performance and reduce its hardware
> requirements.
>  
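
For readers who haven't used F-Stack: the general shape of such a proxy is a
run-to-completion loop that F-Stack drives, with the usual socket/epoll calls
replaced by their ff_-prefixed equivalents. A stripped-down sketch of that shape,
modeled on F-Stack's bundled example rather than Pavel's actual proxy (error
handling and the forwarding logic to the origin side are omitted, the port is a
placeholder, and the epoll header name is whatever your F-Stack tree ships):

    #include <string.h>
    #include <sys/types.h>
    #include <netinet/in.h>

    #include "ff_config.h"
    #include "ff_api.h"
    #include "ff_epoll.h"   /* epoll header as in my F-Stack tree; adjust if yours differs */

    #define MAX_EVENTS 512

    static int epfd;
    static int listenfd;

    /* Called by F-Stack once per iteration of its run-to-completion loop. */
    static int loop(void *arg)
    {
        (void)arg;
        struct epoll_event events[MAX_EVENTS];
        int n = ff_epoll_wait(epfd, events, MAX_EVENTS, 0);

        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == listenfd) {
                int fd = ff_accept(listenfd, NULL, NULL);
                struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
                ff_epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
                /* a real transparent proxy would also ff_connect() upstream here */
            } else {
                char buf[16 * 1024];
                ssize_t len = ff_read(events[i].data.fd, buf, sizeof(buf));
                if (len <= 0) {
                    ff_epoll_ctl(epfd, EPOLL_CTL_DEL, events[i].data.fd, NULL);
                    ff_close(events[i].data.fd);
                }
                /* ... otherwise forward buf to the peer side of the connection ... */
            }
        }
        return 0;
    }

    int main(int argc, char *argv[])
    {
        ff_init(argc, argv);             /* parses --conf/--proc-id and brings up DPDK */

        listenfd = ff_socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);     /* placeholder port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        ff_bind(listenfd, (struct linux_sockaddr *)&addr, sizeof(addr));
        ff_listen(listenfd, 1024);

        epfd = ff_epoll_create(512);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listenfd };
        ff_epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

        ff_run(loop, NULL);              /* never returns */
        return 0;
    }
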
>       >
>       > Here are the test results:
>       > 1. The Linux-based proxy was able to handle about 1.7-1.8 Gbps before it
>       > started to throttle the traffic. No visible CPU usage was observed on core
>       > 0 during the tests; only core 1, where the application and the IRQs were
>       > pinned, took the load.
>       > 2. The DPDK+FreeBSD proxy was able to handle 700-800 Mbps before it
>       > started to throttle the traffic. No visible CPU usage was observed on core
>       > 0 during the tests; only core 1, where the application was pinned, took the
>       > load. In some of the later tests I made some changes to the number of
>       > packets read in one call from the network card and the number of events
>       > handled in one call to epoll. With these changes I was able to increase the
>       > throughput
>       > to 900-1000 Mbps but couldn't increase it further.
>       > 3. We did another test with the DPDK+FreeBSD proxy just to give us some
>       > more info about the problem. We disabled the TCP proxy functionality and
>       > let the packets simply be IP-forwarded by the FreeBSD stack. In this test
>       > we reached up to 5 Gbps without the traffic being throttled. We just
>       > don't have more traffic to redirect there at the moment. So the bottleneck
>       > seems to be either in the upper levels of the network stack or in the
>       > application
>       > code.
>       >
> 
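
On the two knobs mentioned in point 2 above: the events-per-poll side is just the
maxevents argument the application passes to ff_epoll_wait, so it can be raised
without touching the library, while the packets-read-per-call side sits inside
F-Stack's own RX loop (in my tree that is a burst constant in lib/ff_dpdk_if.c,
but the exact name is from memory, so treat it as an assumption). The generic
DPDK shape of that RX loop, just to illustrate what the knob controls:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_BURST 32   /* the knob: mbufs one rte_eth_rx_burst() call may return */

    static void poll_rx_queue(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[RX_BURST];
        uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts, RX_BURST);

        for (uint16_t i = 0; i < nb; i++) {
            /* F-Stack hands each mbuf to the FreeBSD input path at this point;
               freeing it here is only a placeholder to keep the sketch self-contained */
            rte_pktmbuf_free(pkts[i]);
        }
    }

A bigger burst mostly helps when the cost is dominated by polling overhead; if the
cycles go into the TCP/proxy code above the driver, it won't move the ceiling much.
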
>       I once tested the F-Stack-ported Nginx and used the Nginx TCP proxy; I could
>       achieve above 6 Gbps with iperf. After seeing your email, I set up PCI
>       passthrough to a KVM VM and ran the F-Stack Nginx as a web server
>       with an HTTP load test, no proxy; I could achieve about 6.5 Gbps.
> 
> Can I ask how many cores you ran Nginx on?

I used 4 cores on the VM.
 
> The results from our tests are from a single core. We are trying to reach
> max performance on a single core because we know that the F-Stack solution
> scales linearly. We tested it on 3 cores and got around 3 Gbps, which
> is 3 times the single-core result.
> Also, we test with traffic from one internet service provider. We just
> redirect a few IP pools to the test machine for the duration of the tests, see
> at which point the proxy starts choking the traffic, and then switch the traffic back.

I used the mTCP-ported apache bench to do the load test. Since the F-Stack and
apache bench machines are directly connected with a cable, and running a
capture on mTCP or F-Stack would affect performance, I do not have a
capture to see whether there are significant packet drops when achieving
6.5 Gbps.
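
One way to check for drops without a packet capture is to read the NIC counters
straight from DPDK: rte_eth_stats_get() exposes imissed/ierrors/rx_nombuf, which
usually tells you whether the RX side is losing packets under load. A minimal
sketch, assuming it runs in the process that owns the port (e.g. called
periodically from the loop):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Print the drop-related counters for one port. imissed counts packets the
       NIC dropped because the RX queues were full, rx_nombuf counts mbuf
       allocation failures, ierrors counts packets the hardware flagged as bad. */
    static void print_drop_stats(uint16_t port_id)
    {
        struct rte_eth_stats st;

        if (rte_eth_stats_get(port_id, &st) != 0)
            return;

        printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
               " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               port_id, st.ipackets, st.imissed, st.ierrors, st.rx_nombuf);
    }

On the plain Linux side the equivalent quick check is ethtool -S on the
receiving interface.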

> 
>       > There is a Huawei switch which redirects the traffic to this server. It
>       > regularly
>       > sends ARP requests (arping), and if the server doesn't respond it stops the
>       > redirection. So we assumed that when the redirection stops, it's because the
>       > server throttles the traffic, drops packets, and can't respond to the ARP
>       > requests
>       > because of the packet drops.
> 
>       I did have some weird issue with ARP in F-Stack; I manually added a
>       static ARP entry for the F-Stack interface for each F-Stack process. Not sure if it
>       is related to your ARP issue, see https://github.com/F-Stack/f-stack/issues/515
> 
> Hmm, I had missed that. Thanks a lot, because it may help for the tests and
> for the next stage.
> 
>  
> 
> 

