[dpdk-users] X710 DA2 (2x10G) performance 64B packet

Tomáš Jánský tomas.jansky at flowmon.com
Thu Mar 21 15:44:52 CET 2019


Thanks, Paul, for another suggestion.

This is my boot setup now:
default_hugepagesz=1G hugepagesz=1G isolcpus=1-4 nohz_full=1-4 rcu_nocbs=1-4

and I am using 16x 1GB hugepages on the NUMA node.
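The per-node count of reserved 1G pages can be verified via sysfs:

cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages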
But so far no improvement.

Tomas

On Thu, Mar 21, 2019 at 3:31 PM Paul T <paultop6 at outlook.com> wrote:

> 1GB huge page chunks instead of 2MB would also be worth a try
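> For example, sixteen 1G pages can be reserved at boot via the kernel
> command line: default_hugepagesz=1G hugepagesz=1G hugepages=16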
>
> ------------------------------
> *From:* Tomáš Jánský <tomas.jansky at flowmon.com>
> *Sent:* 21 March 2019 14:00
> *To:* Paul T
> *Cc:* users at dpdk.org
> *Subject:* Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet
>
> Hi Paul,
>
> thank you for your suggestion.
> I tried isolating the cores; however, the improvement was negligible.
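> (On recent kernels the isolation can be double-checked with
> cat /sys/devices/system/cpu/isolated, which lists the isolated cores.)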
>
> Tomas
>
> On Thu, Mar 21, 2019 at 12:50 PM Paul T <paultop6 at outlook.com> wrote:
>
> Hi Tomas,
>
> I would isolate the CPUs on which the DPDK threads are running from the
> Linux scheduler.  The low packet drop rate at 64B makes me think it's
> context switching happening on the cores due to the Linux scheduler.
>
> Add the following to the kernel command-line parameters in your GRUB
> config:
> isolcpus=<cpus to isolate>, e.g. isolcpus=1,3,4 or isolcpus=1-4
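> A minimal sketch (file locations vary by distro; this assumes a
> GRUB2/RHEL-style layout):
>
> # append to GRUB_CMDLINE_LINUX in /etc/default/grub
> GRUB_CMDLINE_LINUX="... isolcpus=1-4"
>
> # regenerate the config, then reboot
> grub2-mkconfig -o /boot/grub2/grub.cfg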
>
> Regards
>
> Paul
>
> Date: Thu, 21 Mar 2019 10:53:34 +0100
> From: Tom?? J?nsk? <tomas.jansky at flowmon.com>
> To: users at dpdk.org
> Subject: [dpdk-users] X710 DA2 (2x10G) performance 64B packets
>
> Hello DPDK users,
>
> I am having an issue concerning the performance of an X710 DA2 (2x10G) NIC
> when using the testpmd (and also the l2fwd) application on both ports.
>
> HW and SW parameters:
> CPUs: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz x16
> Disabled hyperthreading.
> All used lcores and ports are on the same NUMA node (0).
> Hugepages: 1024x 2MB on NUMA node 0.
> RAM: 64 GB
>
> DPDK version: 18.05.1
> Module: IGB UIO
> GCC version: 4.8.5
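> For completeness, the ports are bound to igb_uio along these lines (the
> module path depends on the build target, and the second port's PCI
> address is assumed here):
>
> modprobe uio
> insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> usertools/dpdk-devbind.py --bind=igb_uio 0000:04:00.0 0000:04:00.1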
>
> When using the testpmd application on only one port:
> ./testpmd -b 0000:04:00.0 -n 4 --lcore=0@0,2@2 -- --socket-num=0
> --nb-cores=1 --nb-ports=1 --numa --forward-mode=rxonly
>
> 14.63 Mpps (64B packet length) - 0.01% packets dropped
>
> When using testpmd on both ports:
> ./testpmd -n 4 --lcore=0@0,2@2,4@4 -- --socket-num=0 --nb-cores=2
> --nb-ports=2 --numa --forward-mode=rxonly
>
> 28.08 Mpps (64B packet length) - 3.47% packets dropped
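>
> For reference, per-port drop counters can be read from the testpmd
> prompt, where they show up as RX-missed (the NIC's imissed counter,
> i.e. packets dropped because the RX ring was full):
>
> testpmd> show port stats all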
>
> Does anybody have an explanation for why I am experiencing this performance
> drop?
> Any suggestion would be much appreciated.
>
> Thank you
> Tomas
>
>

